2 Summary of Workshop Discussions

SESSION 1: Process, Architecture, and the Grand Scale

Panelists: John Vu, Boeing, and Rick Selby, Northrop Grumman Corporation
Moderator: Michael Cusumano

Panelist presentations and general discussions at this session were intended to explore the following questions from the perspectives of software development for government and commercial aerospace systems:

• What are the characteristics of successful approaches to architecture and design for large-scale systems and families of systems?
• Which architecture ideas can apply when systems must evolve rapidly?
• What kinds of management and measurement approaches could guide program managers and developers?

Synergies Across Software Technologies and Business Practices Enable Successful Large-Scale Systems

Context matters in trying to determine the characteristics of successful approaches: different customer relationships, goals and needs, pacing of projects, and degree of precedent all require different practices. For example, different best practices may apply depending on what sort of
system or application is under development. Examples discussed include commercial software products, IT and Internet financial services, airplanes, and government aerospace systems.

• Different systems and software engineering organizations have different customers and strategies. They may produce a variety of deliverables, such as a piece of software, an integrated hardware-software environment, or very large, complicated, interconnected, hardware-software networked systems.
• Different systems and software engineering organizations have different goals and needs. Product purposes vary: user empowerment, business operations, and mission capabilities. Projects can last from a month to 10 or 12 years. The project team can be one person or literally thousands. The customer agreement can be a license, service-level agreement, or contract. There can be a million customers or just one (for example, the government). The managerial focus can be on features and time to market; cycle time, workflow, and uptime; or reliability, contract milestones, and interdependencies; and so on.
• While some best practices, such as requirements and design reviews and incremental and spiral life cycles, are broadly applicable, specific practice usually varies. Although risk management is broadly applicable, commercial, financial, and government system developers may adopt different kinds of risk management. While government aerospace systems developers may spend months or years doing extensive system modeling, this may not be possible in other organizations or for other types of products. Commercial software organizations may focus on daily builds (that is, each day compiling and possibly testing the entire program or system incorporating any new changes or fixes); for aerospace systems, the focus may be on weekly or 60-day builds.
Other generally applicable best practices that vary by market and organization include parallel small teams, design reuse, domain-specific languages, opportunity management, trade-off studies, and portability layers. These differences are driven by the different kinds of risks that drive engineering decisions in these sectors.
• Government aerospace systems developers, along with other very large software-development enterprises, employ some distinctive best practices. These include independent testing teams and, for some aspects of the systems under consideration, deterministic, simple designs. These practices are driven by a combination of engineering, risk-management, and contractual considerations.

In a very large organization, synergy across software technologies and business practices can contribute to success. [Footnote: Very large in this case means over 100,000 employees throughout a supply chain doing systems engineering, systems development, and systems management; managing multiple product lines; and building systems with millions of lines of code.] Participants explored
SOFTWARE-INTENSIVE SYSTEMS AND UNCERTAINTY AT SCALE

the particular case of moderately precedented systems and major components with control-loop architectures. For systems of this kind there are technology and business practice synergies that have worked well. Here are some examples noted by speakers:

• Decomposition of large systems to manage risk. With projects that typically take between 6 and as many as 24 months to deliver, incremental decomposition of the system can reduce risk, provide better visibility into the process, and deliver capability over time. Decomposition accelerates system integration and testing.
• Table-based design, oriented to a system engineering view in terms of states and transitions, both nominal and off-nominal. This enables the use of clear, table-driven approaches to address nominal modes, failure modes, transition phases, and different operations at different parts of the system operations.
• Use of built-in, domain-specific (macro) languages in a layered architecture. The built-in, command-sequencing macro language defines table-driven executable specifications. This permits a relatively stable infrastructure and a run-time system with low-level, highly deterministic designs yet extensible functionality. It also allows automated testing of the systems.
• Use of precedented and well-defined architectures for the task management structure that incorporate a simple task structure, deterministic processing, and predictable timelines. For example, a typical three-task management structure might have high-rate (32 ms) tasks, minor-cycle (128 ms) tasks, and background tasks. The minor cycle reads and executes commands, formats telemetry, handles fault protection, and so forth. The high-rate cycle handles message traffic between the processors. The background cycle adds capability that takes a longer processing time.
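The three-rate task structure just described can be sketched as a simple rate-group scheduler. The 32 ms and 128 ms periods follow the example above; the task names and the simulated-time loop are illustrative assumptions, not any particular flight system's design.

```python
# Hypothetical sketch of the three-rate task structure described above:
# a high-rate (32 ms) group, a minor-cycle (128 ms) group, and a
# background group that runs only when no periodic work is due.

HIGH_RATE_MS = 32     # handles inter-processor message traffic
MINOR_CYCLE_MS = 128  # commands, telemetry formatting, fault protection

def schedule(total_ms, log):
    """Simulate one scheduling window, appending task names to log."""
    for now in range(0, total_ms, HIGH_RATE_MS):
        log.append("high_rate")        # every 32 ms tick, highest priority
        if now % MINOR_CYCLE_MS == 0:
            log.append("minor_cycle")  # every fourth high-rate tick
        else:
            log.append("background")   # spare time goes to background work
    return log

log = schedule(256, [])
```

In a real system the rate groups would be driven by a hardware timer interrupt rather than a loop, but the fixed priority ordering (high-rate first, background last) is what gives these designs their predictable timelines.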
This three-task structure is a reusable processing architecture that has been used for over 30 years in spacecraft and is aimed at the construction of highly reliable, deterministic systems. [Footnote: Precedent refers to the extent to which we have experience with systems of a similar kind. More specifically, there can be precedent with respect to requirements, architecture and design, infrastructure choices, and so on. Building on precedent leads to routinization, reduced engineering risk, and better predictability (lower variance) of engineering outcomes.]
• Gaining advantages from lack of fault proneness in reused components by achieving high levels of code, design, and requirement reuse. One example of code reuse was this: Across 25 NASA ground systems, 32 percent of software components were either reused or modified from previous systems (for spacecraft, reuse was said to be as high as 80 percent). Designs and requirements can also be reused. Typically, there is a large backward
compatibility set of requirements, and these requirements can be reused. Requirements reuse is very common and very successful even though the design and implementation might each be achieved differently. Design reuse might involve allocation of function across processors in terms of how particular algorithms are structured and implemented. The functions might be implemented differently in the new system, for example, in components rather than custom code or in different programming languages. This is an example of true design reuse rather than code reuse.

In addition to these synergies, it was suggested that other types of analyses could also contribute to successful projects. Data-driven statistical analyses can help to identify trends, outliers, and process improvements to reduce or mitigate defects. For example, higher rates of component interactions tend to be correlated with more faults, as well as more fault-correction effort. Risk analyses prioritize risks according to cost, schedule, and technical safety impacts. Charts that show project risk mitigation over time and desired milestones help to define specific tasks and completion criteria. It was suggested that each individual risk almost becomes a microcosm of its own project, with schedules and milestones and progressive mitigation of that risk to add value.

One approach to addressing the challenge of scale is to divide and conquer. Of course, arriving at an architectural design that supports decomposition is a prerequisite for this approach, which can apply across many kinds of systems development efforts. Suggestions included the following:

• Divide the organization into parallel teams. Divide very large 1,000-person teams into parallel teams; establish a project rhythm of design cycles and incremental releases.
This division of effort is often based on a system architectural structure that supports separate development paths (an example of what is known as Conway's law: software system structures tend to reflect the structures of the organizations that develop them). Indeed, without agreement on architectural fundamentals (the key internal interfaces and invariants in a system), division of effort can be a risky step.
• Innovate and synchronize. Bring the parallel teams together, whether the task is a compilation or a component delivery and interface integration. Then stabilize, usually through some testing and usage period.
• Encourage coarse-grain reuse. There is a lot of focus on very fine-grain reuse, which tends to involve details about interfaces and dependencies; there is also significant coarse-grain opportunity to bring together both legacy systems and new systems. A coarse-grain approach makes possible the accommodation of systems at different levels of maturity.
Examples of success in coarse-grain reuse are major system frameworks (such as e-commerce frameworks), service-based architectures, and layered architectures.
• Automate. Automation is needed in the build process, in testing, and in metrics.

Uncertainty is inherent in the development of software-intensive systems and must be reassessed constantly, because there are always unknowns and incomplete information. Waiting for complete information is costly, and it can take significant time to acquire the information, if it is possible to acquire it at all. Schedules and budgets are always limited and almost never sufficient. The goal, it was argued, should be to work effectively and efficiently within the resources that are available and discharge risks in an order that is appropriate to the goals of the system and the nature of its operating environment: Establish the baseline design, apply systematic risk management, and then apply opportunity management, constantly evaluating the steps needed and making decisions about how to implement them. Thus, it was suggested that appropriate incentives and analogous penalty mechanisms at the individual level and at the organization or supplier level can change behavior quickly. The goal is thus for the incentive structure to create an opportunity to achieve very efficient balance through a "self-managing organization." In a self-managing organization, it was suggested, the leader has the vision and is an evangelist rather than a micromanager, allowing others to manage using systematic incentive structures.

Some ways to enable software technology and business practices for large-scale systems were suggested:

• Creating strategies, architectures, and techniques for the development and management of systems and software, taking into account multiple customers and markets and a broad spectrum of projects, from small scale through large.
• Disseminating validated experiences and data for best practices and the circumstances when they apply (for example, titles like "Case Studies of Model Projects").
• Aligning big-V, waterfall-like systems engineering life-cycle models with incremental/spiral software engineering life-cycle models. [Footnote: The V model is a V-shaped, graphical representation of the systems development life cycle that defines the results to be achieved in a project, describes approaches for developing these results, and links early stages (on the left side of the V) with evaluation and outcomes (on the right side). For example, requirements link to operational testing and detailed design links to unit testing.]
• Facilitating objective interoperability mechanisms and benchmarks for enabling information exchange.
• Lowering entry barriers for research groups and nontraditional suppliers to participate in large-scale system projects (Grand Challenges, etc.).
• Encouraging advanced degree programs in systems and software engineering.
• Defining research and technology roadmaps for systems and software engineering.
• Collaborating with foreign software developers.

Process, Architecture, and Very Large-Scale Systems

Remarks during this portion of the session were aimed at thinking outside the box about what the state of the art in architectures might look like in the future for very large-scale, complex systems that exhibit unpredictable behavior. The primary context under discussion was large-scale commercial aircraft development; the Boeing 777 has a few million lines of code, for example, and the new 787 has several million and climbing.

It was argued that very large-scale, highly complex systems and families of systems require new thinking, new approaches, and new processes to architect, design, build, acquire, and operate. It was noted that these new systems are going from millions of lines of code to tens of millions of lines of code (perhaps in 10 years to billions of lines of code and beyond); from hundreds of platforms (servers) to thousands, all interconnected by heterogeneous networks; from hundreds of vendors (and subcontractors) to thousands, all providing code; and from a well-defined user community to dynamic communities of interdependent users in changing environments. It was suggested that the issue for the future, 10 or 20 years from now, is how to deal with the potential billion lines of code and tens of thousands of vendors in the very diverse, open-architecture-environment global products of the future, assembled from around the world.
According to the forward-looking vision presented by speakers, these systems may have the following characteristics:

• Very large-scale systems would integrate multiple systems, each of them autonomous, having distinctive characteristics, and performing its own functions independently to meet distinct objectives.
• Each system would have some degree of intelligence, with the objectives of enabling it to modify its relationship to other component systems (in terms of functionality) and allowing it to respond to changes, perhaps unforeseen, in the environment. When multiple systems are joined
together, the significant emergent capabilities of the resulting system as a whole would enable common goals and objectives.
• Each very large-scale system would share information among the various systems in order to address more complex problems.
• As more systems are integrated into a very large-scale system, the channels connecting all systems through which information flows would become more robust and continue to grow and expand throughout the life cycle of the very large-scale system.

It was argued that a key benefit of a very large-scale system is the interoperability between operational systems that allows decision makers to make better, more informed decisions more quickly and accurately. From a strategic perspective, a very large-scale system is an environment where operational systems have the ability to share information among geographically distributed systems with appropriate security and to act on the shared information to achieve their desired business goals and objectives. From an operational perspective, a very large-scale system is an environment where each operational subsystem performs its own functions autonomously, consistent with the overall strategy of the very large-scale system.

The notion of continuous builds or continuous integration was also discussed. Software approaches that depend on continuous integration (that is, where changes are integrated very frequently) require processes for change management and integration management. These processes are incremental and build continuously from the bottom up to support evolution and integration, instead of from the top down, using a plan-driven, structured design. They separate data and functions for faster updates and better security. To implement these processes, decentralized organizations and an evolving concept of operations are required to adapt quickly to changing environments.
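The continuous-integration discipline described above can be sketched as a small integration gate: each change set must build and pass tests against the current baseline before it is accepted, otherwise it is rolled back. The data shapes, function names, and toy build/test stand-ins below are illustrative assumptions, not any particular organization's process.

```python
# Hypothetical sketch of a continuous-integration gate: every change is
# built and tested against the current baseline; failures are rolled back
# rather than accumulating until a big-bang integration.

def integrate(baseline, changes, build, test):
    """Apply each change to the baseline only if the result builds and tests."""
    accepted, rejected = [], []
    for change in changes:
        candidate = baseline + [change]
        if build(candidate) and test(candidate):
            baseline = candidate          # change is now part of the baseline
            accepted.append(change)
        else:
            rejected.append(change)       # roll back: baseline is untouched
    return baseline, accepted, rejected

# Toy stand-ins for real build and test stages: a change "breaks" the
# candidate if its flag is False.
ok = lambda candidate: all(flag for _, flag in candidate)
baseline, accepted, rejected = integrate(
    [], [("add-telemetry", True), ("bad-patch", False), ("fix-fault-log", True)],
    build=ok, test=ok)
```

The daily builds mentioned in Session 1 are this same loop run on a 24-hour rhythm; continuous integration simply shortens the interval so a bad change is isolated to one change set instead of a day's worth.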
The overall architectural framework for large-scale systems described by some participants in this session consists of five elements:

• Governance. These describe the rules, regulations, and change management that control the total system.
• Operational. These describe how each operational system can be assembled from preexisting or new components (building blocks) to operate in its own new environment so that it can adapt to change.
• Interaction. These describe the communication (information pipeline) and interaction between operational systems that may affect the very large system and how the very large system will react to the inputs from the operational systems.
• Integration and change management. These describe the processes for managing change and the integration of systems that enable emergent capabilities.
• Technical. These depict the technology components that are necessary to support these systems.

It was suggested that the large-scale systems of the future that will cope with scale and uncertainty would be built from the bottom up, starting with autonomous building blocks to enable the rapid assembly and integration of these components to effectively evolve the very large-scale system. The architectural framework would ensure that each building block would be aligned to the total system. Building blocks would be assembled by analyzing a problem domain through the lens of an operational environment or mission for the purpose of creating the characteristics and functionality that would satisfy the stakeholders' requirements. In this mission-focused approach, all stakeholders and modes of operations should be clearly identified; different user viewpoints and needs should be gathered for the proposed system; and stakeholders must state which needs are essential, which are desirable, and which are optional. Prioritization of stakeholders' needs is the basis for the development of such systems; vague and conflicting needs, wants, and opinions should be clarified and resolved; and consensus should be built before assembling the system.

At the operational level, the system would be separated from current rigid organization structures (people, processes, technology) and would evolve into a dynamic concept of operation by assembling separate building blocks to meet operational goals. The system manager should ask: What problem will the system solve? What is the proposed system used for? Should the existing system be improved and updated by adding more functionality, or should it be replaced? What is the business case?
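One minimal reading of the building-block assembly described above: select blocks so that every essential stakeholder need is covered, then add blocks for desirable needs as coverage allows. The block catalog, need labels, and greedy selection rule are illustrative assumptions, not a method endorsed in the session.

```python
# Hypothetical sketch of mission-focused assembly: each building block
# advertises the needs it satisfies; assembly must cover every essential
# need and then adds blocks that contribute uncovered desirable needs.

def assemble(blocks, needs):
    """blocks: {name: set of needs}; needs: {need: 'essential'|'desirable'|'optional'}."""
    essential = {n for n, p in needs.items() if p == "essential"}
    chosen = {name for name, provides in blocks.items() if provides & essential}
    covered = set().union(*(blocks[name] for name in chosen)) if chosen else set()
    missing = essential - covered
    if missing:
        raise ValueError(f"no building block covers essential needs: {missing}")
    wanted = {n for n, p in needs.items() if p == "desirable"}
    for name, provides in blocks.items():
        if provides & (wanted - covered):   # contributes something new and desirable
            chosen.add(name)
            covered |= provides
    return chosen

blocks = {
    "nav": {"navigation"},
    "telemetry": {"telemetry", "logging"},
    "replay": {"logging"},
}
needs = {"navigation": "essential", "telemetry": "essential",
         "logging": "desirable", "replay-ui": "optional"}
chosen = assemble(blocks, needs)
```

The point of the sketch is the failure mode: if no block covers an essential need, assembly stops before integration, which is where the session's emphasis on clarifying vague and conflicting needs up front pays off.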
To realize this future, participants suggested that research is needed in several areas, including these:

• Governance (rules and regulations for evolving systems).
• Interaction and communication among systems (including the possibility of negative interactions between individual components and the integrity, security, and functioning of the rest of the system).
• Integration and change management.
• The user's perspective and user-controlled evolution.
• Technologies supporting evolution.
• Management and acquisition processes.
• An architectural structure that enables emergence.
• Processes for decentralized organizations structured to meet operational goals.

SESSION 2: DoD Software Challenges for Future Systems

Panelists: Kristen Baldwin, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, and Patrick Lardieri, Lockheed Martin
Moderator: Douglas Schmidt

Panelist presentations and general discussions during this session were intended to explore two questions, from two perspectives: that of the government and that of the government contractor:

• How are challenges for software in DoD systems, particularly cyber-physical systems, being met in the current environment?
• What advancements in R&D, technology, and practices are needed to achieve success as demands on software-intensive system development capability increase, particularly with respect to scale, complexity, and the increasingly rapid evolution in requirements (and threats)?

DoD Software Engineering and System Assurance

An overview of various activities relating to DoD software engineering was given. Highlights from the presentation and discussion follow. The recent Acquisition & Technology reorganization is aimed at positioning systems engineering within the DoD, consistent with a renewed emphasis on software. The director of Systems and Software Engineering now reports directly to the Under Secretary of Defense for Acquisition and Technology. The mission of Systems and Software Engineering, which addresses evolving system- and software-engineering challenges, is as follows:

• Shape acquisition solutions and promote early technical planning.
• Promote the application of sound systems and software engineering, developmental test and evaluation, operational test and evaluation to determine operational suitability and effectiveness, and related technical disciplines across DoD's acquisition community and programs.
• Raise awareness of the importance of effective systems engineering and raise program planning and execution to the state of the practice.
• Establish policy, guidance, best practices, education, and training in collaboration with the academic, industrial, and government communities.
• Provide technical insight to program managers and leadership to support decision making.

DoD's Software Center of Excellence is made up of a community of participants including industry, DoD-wide partnerships, national partnerships, and university and international alliances. It will focus on supporting acquisition; improving the state of the practice of software engineering; providing leadership, outreach, and advocacy for the systems engineering communities; and fostering resources that can meet DoD goals. These are elements of DoD's strategy for software, which aims to promote world-class leadership for DoD software engineering.

Findings from some 40 recent program reviews were discussed. These reviews identified seven systemic issues and issue clusters that had contributed to DoD's poor execution of its software program, which were highlighted in the session discussion. The first issue is that software requirements are not well defined, traceable, and testable. A second issue cluster involves immature architectures; integration of commercial-off-the-shelf (COTS) products; interoperability; and obsolescence (the need to refresh electronics and hardware). The third cluster involves software development processes that are not institutionalized and that have missing or incomplete planning documents and inconsistent reuse strategies. A fourth issue is software testing and evaluation that lacks rigor and breadth. The fifth issue is lack of realism in compressed or overlapping schedules. The sixth issue is that lessons learned are not incorporated into successive builds; they are not cumulative. Finally, software risks and metrics are not well defined or well managed.
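The first issue, requirements that are not traceable or testable, becomes mechanically checkable once requirements are linked to design elements and tests. The sketch below shows one illustrative form of that audit; the requirement IDs, link tables, and function name are invented for the example.

```python
# Hypothetical traceability audit: every requirement should trace to at
# least one design element and at least one verifying test. All IDs and
# link structures here are invented for illustration.

def audit(requirements, design_links, test_links):
    """Return requirement IDs lacking a design trace or a verifying test."""
    untraced = [r for r in requirements if r not in design_links]
    untested = [r for r in requirements if not test_links.get(r)]
    return untraced, untested

requirements = ["REQ-001", "REQ-002", "REQ-003"]
design_links = {"REQ-001": "fault-protection module",
                "REQ-002": "telemetry formatter"}
test_links = {"REQ-001": ["TST-017"], "REQ-002": []}

untraced, untested = audit(requirements, design_links, test_links)
```

A report like this makes the review finding concrete: an untraced or untested requirement is visible at every build rather than being discovered during program review.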
To address these issues, DoD is pursuing an approach that includes the following elements:

• Identification of software issues and needs through a software industrial base assessment, a National Defense Industrial Association (NDIA) workshop on top software issues, and a defense software strategy summit. The industrial base assessment, performed by CSIS, found that the lack of comprehensive, accurate, timely, and comparable data about software projects within DoD limits the ability to undertake any bottom-up analysis or enterprise-wide assessments about the demand for software. [Footnote: Center for Strategic and International Studies (CSIS), Defense-Industrial Initiatives Group, 2006, Software Industrial Base Assessment: Phase I Report, October 4.] Although the CSIS analysis suggests that the overall pool of software developers is adequate, the CSIS assessment found an imbalance in the supply of and demand for the specialized, upper echelons of software developer/management cadres. These senior cadres can be grown, but it takes time (10 or more years) and
a concerted strategy. In the meantime, management/architecture/systems engineering tools might help improve the effectiveness of the senior cadres. Defense business system/COTS software modification also places stress on limited pools of key technical and management talent. Moreover, the true cost and risk of software maintenance deferral are not fully understood.
• Creation of opportunities and partnerships through an established network of government software points of contact, chartering of the NDIA Software Committee, information exchanges with government, academia, and industry, and planning of a systems and software technology conference. Top issues emerging from the NDIA Defense Software Strategy Summit in October 2006 included establishment and management of software requirements, the lack of a software voice in key system decisions, inadequate life-cycle planning and management, the high cost and ineffectiveness of traditional software verification methods, the dearth of software management expertise, inadequate technology and methods for assurance, and the need for better techniques for COTS assessment and integration.
• Execution of focused initiatives such as Capability Maturity Model Integration (CMMI) support for integrity and acquisition, a CMMI guidebook, a handbook on engineering for system assurance, a systems engineering guide for systems of systems (SoSs), the provision of software support to acquisition programs, and a vision for acquisition reform.
SoSs to be used for defense require special considerations for scale (a single integrated architecture is not feasible), ownership and management (individual systems may have different owners), legacy (given budget considerations, current systems will be around for a long time), changing operations (SoS configurations will face changing and unpredictable operational demands), criticality (systems are integrated via software), and the role of the network (SoSs will be network-based, but budget and legacy challenges may make implementation uneven). To address a complex SoS, an initial (incremental) version of the DoD's SoS systems engineering guide is being piloted; future versions will address enterprise and net-centric considerations, management, testing, and sustaining engineering.

The issue of system assurance, that is, reducing the vulnerability of systems to malicious tampering or access, was noted as a fundamental consideration, to the point that cybertrust considerations can be a fundamental driver of requirements, architecture and design, implementation practice, and quality assurance. [Footnote: A separate National Research Council study committee is exploring the issue of cybersecurity research and development broadly, and its report, Toward a Safer and More Secure Cyberspace, will be published in final form in late 2007. See http://cstb.org/project_cybersecurity for more information.] Because current assurance, safety, and protection
initiatives are not aligned, a comprehensive strategy for system assurance initiatives is being developed, including standards activities and guidance to put new methods into practice.

One additional challenge for DoD is that, given its shortage of software resources and critical dependence on software, it cannot afford to have stovepipes in its community. To that end, the DoD Software Center of Excellence is intended to be a focal point for the community. Areas to be explored include, for example, agile methods, software estimation (a harder problem to address for unprecedented systems than for precedented ones), and software testing.

Challenges in Developing DoD Cyber-Physical Systems

This session explored challenges in building cyber-physical systems for DoD (systems that integrate physical processes and computer processes in a real-time distributed fashion) from the perspective of a large contractor responsible for a wide range of systems and IT services. Cyber-physical systems are increasingly systems- and software-intensive: for example, in 1960, only 8 percent of the F-4 fighter capability was provided by software; in 2000, 85 percent of the F-22 fighter capability was provided by software. Such systems are distributed, real-time systems with millions of lines of code, driven by multiple sensors reporting at a variety of timescales and by multiple weapon system and machinery control protocols. Current examples of cyber-physical systems include the Joint Strike Fighter (JSF) and Future Combat Systems (FCS); examples of forthcoming technologies would be teams of autonomous robots or teams of small, fast surface ships.
Characteristics of these systems exemplify the challenges of uncertainty and scale; they include:

• Large scale: tens of thousands of functional and performance requirements, 10 million lines of code, and 100 to 1,000 software configuration items;
• Simultaneous conflicting performance requirements: real-time processing, bounded failure recovery, security;
• Implementation diversity: programming languages, operating systems, middleware, complex deployment architectures, 20- to 40-year system life cycles, stringent certification standards; and
• Complex deployment architectures: systems of systems; mixed wired, wireless, and satellite networks; multi-tiered servers; personal digital assistants (PDAs) and workstations; and multiple system configurations.

These systems are challenging, complex, and costly. Accordingly, system design challenges are frequently simplified by deferring or eliminating capability to bound costs and delivery dates. Nevertheless, cost overruns and schedule delays are common. The GAO reported that in fiscal year 2003 (FY03) the DoD spent $21 billion on research, development, testing, and evaluation (RDT&E) for new warfighting systems; about 40 percent of that may have been spent on reworking software to remedy quality-related issues. [Footnote: Government Accountability Office (GAO), 2004, "Defense acquisitions: Stronger management practices are needed to improve DOD's software-intensive weapon acquisitions," Report to the Committee on Armed Services, U.S. Senate, GAO-04-393, March.] For the F/A-22, the GAO reported that Air Force officials do not understand avionics software instability well enough to predict when they will be able to resolve its problems. [Footnote: Government Accountability Office (GAO), 2003, "Tactical aircraft: Status of the F/A-22," Statement of Alan Li, Director, Acquisition and Sourcing Management, Testimony Before the Subcommittee on Tactical Air and Land Forces, Committee on Armed Services, House of Representatives, GAO-03-603T, February. See also Government Accountability Office (GAO), 2005, "Tactical aircraft: F/A-22 and JSF acquisition plans and implications for tactical aircraft modernization," Statement of Michael Sullivan, Director, Acquisition and Sourcing Management Issues, Testimony Before the Subcommittee on AirLand, Committee on Armed Services, U.S. Senate, GAO-05-519T, April 6, which concluded as follows: "The original business case elements (needs and resources) set at the outset of the program are no longer valid, and a new business case is needed to justify future investments for aircraft quantities and modernization efforts. The F/A-22's acquisition approach was not knowledge-based or evolutionary. It attempted to develop revolutionary capability in a single step, causing significant technology and design uncertainties and, eventually, significant cost overruns and schedule delays"; and Government Accountability Office (GAO), 2007, "Tactical aircraft: DOD needs a joint and integrated investment strategy," Report to the Chairman, Subcommittee on Air and Land Forces, Committee on Armed Services, House of Representatives, GAO-07-415, April, which concluded as follows: "We have previously recommended that DOD develop a new business case for the F-22A program before further investments in new aircraft or modernization are made. DOD has not concurred with this recommendation, stating that an internal study of tactical aircraft has justified the current quantities planned for the F-22A. Because of the frequently changing OSD-approved requirements for the F-22A, repeated cost overruns, significant remaining investments, and delays in the program, we continue to believe a new business case is required and that the assumptions used in the internal OSD study be validated by an independent source."]

Because of the complex interrelationships between parts of these cyber-physical systems and the high degree of interactive complexity, piecewise deployment of partial systems is not helpful. An example given was a situation regarding the JSF, where changing an instruction memory layout to accommodate built-in test processing unexpectedly damaged the system's ability to meet timing requirements. It was suggested that as a result of experiences such as the one with the Aegis Combat System, where Aegis Baseline 6, Phase I, deployment was delayed for months because of integration problems between two independently designed cyber-physical systems, certification communities have become extremely conservative and require a static configuration for certification.

Despite software's centrality and criticality in DoD cyber-physical systems and in warfighting in general, participants suggested that it is underemphasized in high-level management reviews. For example, the Quadrennial Defense Review calls for more complex systems for advanced warfighting capabilities but mentions software only twice. Some inherent scientific and research challenges underlying engineering and engineering management of DoD cyber-physical systems cited by workshop participants include these:

• The management of knowledge fragmentation: fragmentation among people and teams, geographic areas, organizations, and temporal boundaries;
• Design challenges: the many problems that cannot be clearly defined without specifying the solution and for which every solution is a one-time-only operation (these are sometimes referred to as "wicked problems"); and
• Team collaboration complexity: thousands of requirements and huge teams (hundreds or thousands of engineers) with frequent turnover and highly variable ranges of skill.

With respect to knowledge fragmentation (that is, knowledge split across individual minds, knowledge split across different phases of the development cycle and the life cycle, knowledge split across different artifacts, and knowledge split across various components of an organization), system engineering today is a concurrent top-down process. There is ad hoc coordination among engineers (domain engineers, system engineers, software engineers) at different levels and loose semantic coupling between design and specifications.
There are some problems where it is difficult to say what to do without specifying how, and thereby committing to an implementation; participants noted that current tools do not generally help manage the tremendous interdependence between the specification of the problem and the realization of a solution. Solutions are not necessarily right or wrong, and designers have to iterate rapidly, switching repeatedly between thinking about problem and solution concerns, along the lines of Fred Brooks's description of throwaway prototyping. The process is slow, it is error prone because interaction is ad hoc,

Note that the previous discussion noted the desirability of not having a static configuration in early stages of development.

Frederick P. Brooks, 1995, The Mythical Man-Month: Essays in Software Engineering, Addison-Wesley Professional, New York.
it uses imprecise English prose, and automated checking is relegated to the lowest level, where formal specifications exist. Matters become even worse as the program or system grows in size and complexity.

Large teams managing complex systems must grapple with the issues of large scale in a complex collaborative environment. Interactive complexity has two dimensions: coupling (tight or loose) and interactions (complex or linear). Systems with high interactive complexity, such as nuclear power plants and chemical plants, possess numerous hidden interactions that can lead to system accidents and hazards. Interactive complexity can complicate reuse. Well-known cyber-physical system accidents cited by participants included the Ariane 5, which reportedly reused a module developed for the Ariane 4. That module assumed that the horizontal velocity component would not overflow a 16-bit variable. This was true for Ariane 4 but not for Ariane 5, leading to self-destruction roughly 40 seconds after the launch.10

Cyber-physical systems typically have high interactive complexity. New systems have more resource sharing, which leads to hidden dependencies. There is limited design-time support to understand or reduce interactive complexity. Modeling and analytic techniques are difficult to employ and often are underutilized. Simulations may not capture the system that is actually built; diagrams are not sufficient to convey all consequences of decisions. Thus, present cyber-physical systems rely on human ingenuity at design time and extensive system testing to manage interactive complexity. They also rely on particular knowledge of and experience with specific vendor-sourced components in the "technology stack." For this reason, the structure of the stack tends to resist change, impeding architectural progress and increasing complexity in these systems.
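The overflow scenario described for the Ariane 5 can be sketched in a few lines. Python is used here only for readability (the flight software was not Python), and the velocity values below are hypothetical; the point is that a conversion that silently keeps only 16 bits behaves correctly for values in the smaller range and wraps to a meaningless number for a larger one:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(value: float) -> int:
    """Mimic an unchecked conversion that keeps only the low 16 bits."""
    raw = int(value) & 0xFFFF
    return raw - 0x10000 if raw > INT16_MAX else raw

def to_int16_checked(value: float) -> int:
    """A guarded conversion that surfaces the out-of-range condition."""
    if not INT16_MIN <= value <= INT16_MAX:
        raise OverflowError(f"velocity term {value} exceeds 16-bit range")
    return int(value)

# A value within the 16-bit range converts correctly...
assert to_int16_unchecked(20_000.0) == 20_000
# ...but a larger value (as on the faster Ariane 5 trajectory) silently
# wraps to a negative number instead of failing visibly.
assert to_int16_unchecked(40_000.0) == -25_536
```

The sketch also shows why the reused module's assumption was invisible at the interface: nothing in the unchecked conversion's signature distinguishes it from the checked one, so reuse in a new flight environment carried the hidden range assumption along with it.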
The resulting long and costly development efforts can be expected to run into system accidents.

Elements of a research agenda for cyber-physical systems that perform predictably were discussed. One goal of such research would be to find ways to manage the uncertainty that arises from the highly coupled nature and interactive complexity of system design at very large scale. Two areas were focused on during discussions at this session:

• Platform technology. One example of a platform technology would be generation of custom run-time infrastructures. Current run-time infrastructure is deployed in general-purpose layers that are not designed for specific applications. It is a significant challenge to configure controls

10 J.C. Lyons, 1996, "Ariane 5 Flight 501 failure: Report by the inquiry board," Paris, July 19. Downloaded from http://www.ima.umn.edu/~arnold/disasters/ariane5rep.html on March 15, 2007.
across the layers to achieve performance requirements; analysis is difficult because of many hidden dependencies and because of complex interfaces and capabilities. The generation of custom run-time infrastructure (e.g., WebSprocket) reduces system complexity. Another such technology would be certifiable dynamic resource management services. Current certification processes are based on extensive analysis and testing (hundreds of man-years) of fixed system configurations. Furthermore, these are human-intensive evaluation processes with limited technological support that occur over the design, development, and production life cycle. There is no way to achieve the same level of assurance for untested system configurations that may be generated by an adaptive system in the run-time environment, and these are the kinds of systems that are likely to be deployed in the future.

• System design tools. Model-centric system design would allow evaluation of the design before final implementation by developing prototyping systems and using forms of static verification. Domain-specific modeling languages enable unambiguous system specifications. Model generation tools could be used to make models the center of the development process, synchronized with software artifacts. Tools that enable automated characterization of the behavior of third-party and COTS applications would be helpful. And program transformation tools could be used to make the legacy code base and COTS software compatible with new platforms.

In addition to platform technologies and design tools, cultural elements are also needed to address the challenges of cyber-physical system development. Speakers noted some aspects of these elements:

• Independent, neutral-party benchmarking and evaluation: speakers believed that there is currently insufficient funding for this type of work.
• Development challenges that are realistic and at scale, allowing credible evaluation of technology solutions (measure technologies, not just artifacts).

• System design education as part of the undergraduate curriculum.

SESSION 3: Agility at Scale

Panelists: Kent Beck and Cynthia Andres, Three Rivers Institute
Moderator: Douglas Schmidt

This session addressed the application and applicability of extreme programming's "agile techniques" to very large, complex systems from
the perspectives of technology, development practices, and social psychology. For this session, both speakers and workshop participants were asked the following question:

• How can the engineering and management values that the "agile" community has identified be achieved for larger-scale systems and for systems that must evolve in response to rapidly changing requirements?

Values and Sponsorship

Participants noted that extreme programming (XP), an agile software development methodology, was one of the first methodologies to be explicit about the value system behind its approach and about what is fundamental to this perspective on software development.11 Different development approaches have their own underlying values. Speakers argued that over the 10 years or so since the coining of extreme programming, the key to success seems to be sponsorship: senior-level commitment to adopting XP ideas within an organization. Trying extreme programming can be disruptive, stirring up internal tension and controversy. Effective sponsors advocate among their peers and mitigate these effects. Senior-level sponsors also can help acquire resources to foster teamwork and communication.

When trying extreme programming, it was suggested, people tend to focus initially on the more visible and explicit changes to practices, such as pair programming, weekly releases, sitting together in open rooms, or using a test-first approach. If a fundamental value shift is taking place, practices will change accordingly. However, under pressure, people tend to revert to their old ways. Without support at higher levels for changes in approach and underlying values, and without sustaining that support through periods of organizational discomfort during the transition, simply trying to put new or different practices in place is not very effective.
One speaker noted that senior-level commitment and sponsorship are therefore key to changing values and conveying these changes to larger groups and organizations.

Human Issues in Software Development

One speaker noted that many of the challenges in software development are human issues: People are the developers and people write the

11 For a brief summary of some of the underlying values in agile software development, see "The Agile Manifesto," available online at http://www.agilemanifesto.org/. These values include a focus on individuals over process, working software over documentation, and responsiveness to change over following a particular plan.
software. Limitations to what can be done with software are often limitations of human imagination and of how much complexity can be managed in one person's mind. Innovation requires fresh ideas, and if all parties are thinking similarly, not as many ideas are generated. Many problems have multiple solutions; a key is to sort out which solutions are sufficient and doable. It was suggested that one way to promote innovation is to encourage diversity: Small projects with diverse groups can be effective, in that fostering interaction and coordination across disciplines often results in a stronger, richer set of ideas to choose from.

It was suggested that having interesting problems to work on is a nonmonetary motivator for many software and computer science practitioners. A good example of nonmonetary incentives is open source technology. Participants also noted that marketing innovation, intellectual curiosity, and creativity as organizational goals is important. The perception or image of the work can be crucial to attracting new hires, who may know that the organization's work involves a lot of process, requires great care, and takes a long time, but not necessarily that it is also interesting and creative work.

Trust, Communication, and Risk

Speakers argued that much of the effort in extreme programming comes down to finding better ways of building trust. Examples were given of ways to begin conversations and to put people in contact with one another in order to establish trust.
These include the techniques of Appreciative Inquiry (talking about what works), World Café (acquiring the collective knowledge, insight, and synergies of a group in a fairly short time), and Open Space (people talk about the concerns they have and the issues that matter to them in breakout sessions whose highlights are reported to the rest of the group).12 Some of the technical aspects of extreme programming are useful ways for programmers to demonstrate their trustworthiness. It was suggested that enhancing communication, in part by using these communication techniques, could be useful to DoD. Tools that make developers' testing activities more visible, not just to themselves but also to teammates, contribute to accountability and transparency in development and increase communication as well.

There are technical things that can be done to reduce risk in projects. Some risk-reduction principles persist throughout all of extreme programming (XP). However, the bulk of the XP experience is not at the scale that

12 See the Appreciative Inquiry Commons (http://appreciativeinquiry.case.edu/), the World Café (http://www.theworldcafe.com/), and Open Space (http://www.openspaceworld.org/).
DoD systems manifest. Therefore, one issue raised at the meeting was whether and how the agile programming/extreme programming experience can scale. Three risk-reduction techniques were mentioned:

• Reduce the amount of work that is half done. Half-done work is an inherent risk. The feedback cycle has not been closed. No value has yet been received from the effort that has been expended, and the mix of done and undone work occupies and distracts people. By gradually reducing the inventory of half-done work, a project can be made to run more smoothly, with lower overall risk.

• Find ways to defer the specification of requirement detail. If a project experiences requirement churn, the chances are that too much detail has been specified too soon. There is a clear case to be made for much more carefully specifying the goals of a project up front, but not the means for accomplishing those goals.

• Testing sooner. The longer a bug lives, the more expensive it becomes. One way of addressing that situation and improving the overall effectiveness of development is finding ways to validate software sooner, such as by developer testing. Integration is part of that testing.

Several research topics were discussed at this session:

• Techniques for communication. Examine how teams actually communicate and how they could communicate more effectively.13

• Encouragement of multidisciplinary work and collaboration.

• Learning how to value simplicity. In complex systems, fewer components in the architecture means fewer possible unpredictable interactions.

• Empirical research in software. One example of the results of such research was noted, namely, the appearance of power-law distributions for object usage in software. That is, many objects are used only once, some are used multiple times, and very few are used very frequently.
Exploring the implications of this and other phenomena may provide insight into development methodologies and how to manage complexity and scale.

• Testing and integration techniques. In a complicated deployment environment, finding better ways to get more assurance sooner in the

13 One example is some research under way at the University of Sheffield. Psychologists watch teams using XP methodologies and then report on the psychological effects of using XP, as opposed to other metrics such as defect rates. Results suggest that people are happier doing things this way. See http://www.shef.ac.uk/dcs/research/groups/vt/research/observatory.html for more information.
cycle and more frequently should improve software development as a whole.

SESSION 4: Quality and Assurance with Scale and Uncertainty

Panelists: Joe Jarzombek, Department of Homeland Security; Kris Britton, National Security Agency; Mary Ann Davidson, Oracle Corporation; Gary McGraw, Cigital
Moderator: William Scherlis

Panelist presentations and general discussions in this session were intended to address the following questions, from government and industry perspectives:

• What are the particular challenges for achieving particular assurances for software quality and cybersecurity attributes as scale and interconnection increase?

• What are emerging best practices and technologies?

• What kinds of new technologies and practices could assist? This includes especially interventions that can be made as part of the development process rather than after the fact. Interventions could include practices, processes, tools, and so on.

• How should cost-effectiveness be assessed?

• What are the prospects for certification, both at a comprehensive level and with respect to particular critical quality attributes?

The presentations began by describing the goals and activities of two federal programs in software assurance and went on to explore present and future approaches.

Software Assurance Considerations and the DHS Software Assurance Program

The U.S. Department of Homeland Security (DHS) has a strategic program (discussed in more detail later in this section) to promote integrity, security, and reliability in software.14 This program for software assurance emphasizes security; the risk exposures associated with reliance on software leave a lot of room for improvement. In industry as well as government there is increased concern about security.

Security is difficult to measure. It is difficult to quantify security or to assess relative progress in improving it. Participants noted the need for more comprehensive diagnostic capabilities and standards on which to base assurance claims. Two suggestions were made:

• The software assurance tool industry has not been keeping pace with changes in software systems. Tools that provide point solutions are available, but much of the software industry cannot apply them. As testing processes become more complex, costly, and time consuming, the testing focus frequently narrows to functional testing.

• Tools are not interoperable. This leads to more standards but, paradoxically, less standardization. Less standardization, in turn, leads to decreased confidence and lower levels of assurance.

One remedy for this situation would have the following elements:

• Government, in collaboration with industry and academia, works to raise expectations for product assurance. This would help to advance more comprehensive diagnostic capabilities, methodologies, and tools to mitigate risks.

• Acquisition managers and users start to factor information about suppliers' software development processes and the risks posed by the supply chain into their decision making and acquisition/delivery processes. Information about evaluated products would become available, and products in use could be securely configured.

• Suppliers begin to deliver quality products with requisite integrity and make assurance claims about their IT and the software's safety, security, and dependability. To do this, they would need to have and use relevant standards, qualified tools, independent third-party certifiers, and a qualified workforce.

14 The definition of software assurance that DHS uses comes out of the Committee on National Security Systems: namely, it is the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner. More generally, "assurance" is about confidence; that is, it is a human judgment, not an objective test/verification/analytic result, but rather a judgment based on those results.
It was suggested that software is an industry that demands only minimal levels of responsible practice compared to some other industries and that this is part of the challenge. But raising the level of responsible practice could increase sales to customers that demand high-assurance products.

From the perspective of the DHS, critical infrastructure around the United States is often not owned or operated by U.S. interests. As cyberspace and physical space become increasingly intertwined and software-controlled or -enabled, these interconnections and controls are often
implemented using the Internet. This presents a target-rich environment, especially given the asymmetries at work: According to one speaker, extrapolating from data on average defect rates, a deployed software package of a million lines of code will have 6,000 defects. Even if only 1 percent of those defects introduce security-related vulnerabilities, there are still 60 different vulnerabilities for an adversary to exploit. In an era riddled with asymmetric cyberattacks, claims about system reliability, integrity, and safety must also address the built-in security of the enabling software. Security is an enabler for reliability, integrity, and safety.

Cyber-related disruptions have an economic and business impact because they can lead to the loss of money and time, delayed or cancelled products, and loss of sensitive information, reputation, even life. From a CEO/CIO perspective, disruptions and security flaws can mean having to redeploy staff to deal with problems and increase IT security, reduced end-user productivity, delayed products, and unanticipated patch management costs. Results from a 2006 survey of CIOs by the CIO Executive Council indicate that reliable and vulnerability-free software is a top priority. In that same survey, respondents expressed "low to medium confidence" that software is "free from" flaws, vulnerabilities, and malicious code. The majority of these CIOs would like vendors to certify and test software using qualified tools.

Speakers noted that the second national software summit had identified major gaps in requirements for tools and technologies, as well as major shortcomings in the state of the art and the state of the practice for developing error-free software. A national software strategy was recommended in order to enhance the nation's capability to routinely develop trustworthy software products and ensure the continued competitiveness of the U.S. software industry.
This strategy focused on improving software trustworthiness, educating and fielding the software workforce, re-energizing software R&D, and encouraging innovation in the U.S. industry.15 In addition to the gaps and shortcomings identified at that software summit, a recent PITAC report on national priorities for cybersecurity listed secure software engineering and software assurance among the top ten goals.16

Software assurance contributes to trustworthy software systems. The goals of the DHS Software Assurance (SwA) program promote the security of software across its development, acquisition, and implementation

15 Center for National Software Studies, 2005, "Software 2015: A national software strategy to ensure U.S. security and competitiveness," available online at www.cnsoftware.org/nss2report, April 29.

16 President's Information Technology Advisory Committee (PITAC), 2005, Cybersecurity: A Crisis of Prioritization, February.
life cycles.17 The SwA program is scoped to address trustworthiness, predictable execution, and conformance to requirements, standards, and procedures. It is structured to target people, process, technology, and acquisition. The SwA program is process-agnostic, providing practical guidance in assurance practices and methodologies for process improvement. A developer's guide and glossary discussed during this session, Securing the Software Life Cycle, is not a policy or standard. Instead, it focuses on touch points and artifacts throughout the life cycle that are foundational knowledge, best practices, and tools and resources for building assurance in. Integrating security into the systems engineering life cycle enables the implementation of software assurance.

It was suggested that software assurance would be well served by standards that assign names to practices or collections of practices. Standards are needed to facilitate communication between buyer and seller, government and industry, insurer and insured. They are needed to improve information exchange and interoperability among practices and among tools. The goal is to close the gap between art and practice and to raise the minimum level of responsible practice. Some current standards efforts for software and system life-cycle processes include ISO SC7, ISO SC22, ISO SC27, and IEEE S2ESC. A critical aspect, it was suggested, is language: articulating structured assurance claims supported by evidence and reasoning. For example, the Object Management Group (OMG) has been working with industry and federal agencies to help enable collaboration in a common framework for the analysis and exchange of information related to software trustworthiness.
This framework can be used for building and assembling software components, including legacy systems and large systems and networks: Looking only at product evaluation overlooks the places where systems and networks are most vulnerable, because it is the interaction of all the components as installed that becomes the problem.

One of the challenges often noted regarding standardization of practices is the lag between identification of a best practice and its codification into a standard. This is particularly challenging in areas such as software assurance, where there is rapid evolution of technologies, practices, and related standards. In the future, the goal is for customers to have expectations for product assurance, including information about evaluated products, suppliers' process capabilities, and secure configurations of software, and for suppliers to be able to distinguish themselves by delivering quality products with the requisite integrity and to be able to make assurance claims based on relevant standards.

17 The MITRE Web site, http://www.cwe.mitre.org, can be used to track SwA progress.
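The notion of structured assurance claims supported by evidence can be made concrete with a toy claim-evidence tree. This is a hypothetical schema sketched for illustration only, not the OMG framework itself; the claim texts and evidence file names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a hypothetical claim-evidence tree for an assurance case."""
    text: str
    evidence: list = field(default_factory=list)   # e.g., test reports, scan logs
    subclaims: list = field(default_factory=list)  # child Claim nodes

    def supported(self) -> bool:
        # A claim holds if it has direct evidence, or if it has subclaims
        # and every one of them holds.
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

top = Claim(
    "The component is acceptably secure for deployment",
    subclaims=[
        Claim("No banned APIs are used", evidence=["static-scan.log"]),
        Claim("Input parsing is robust", evidence=["fuzz-report.txt"]),
        Claim("Third-party libraries are vetted"),  # no evidence yet
    ],
)

assert not top.supported()  # one leaf claim still lacks evidence
top.subclaims[2].evidence.append("supplier-audit.txt")
assert top.supported()      # now every leaf is backed by something checkable
```

Tooling built on a structure like this can answer mechanically which claims remain unsupported, which is one way standard forms for assurance claims aim to make them exchangeable between buyer and seller.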
The National Security Agency Center for Assured Software

According to the historical perspective offered by one speaker, the DoD assurance requirements of 30 years ago mostly focused on what became the National Information Assurance Partnership (NIAP) and the trusted product evaluation program.18 Developers were known and trusted intrinsically. By contrast, in today's environment the market for software is a global one: Even U.S. companies are international. DoD has become increasingly concerned about malicious intent. Malicious code done very well is going to look like an accident. Unfortunately, it was argued, assurance is gained today the same way as it was 30 years ago. Various mechanisms for gaining confidence in general-purpose software are also being used for DoD software: functional testing, penetration testing, design and implementation analysis, advanced development environments, trusted developers, process, discipline, and so forth are mechanisms to build confidence. The intention is for the Center for Assured Software to contribute to the advancement of measurable confidence in software.

In today's environment, vendors do not have an incentive to be involved early in the design process, so testing typically is done after the fact, with a third-party orientation. The problem with this model is that it is all about penetration analysis, not building security in, and trust is bestowed by a third party. Moreover, this model does not scale very well. In one speaker's view, assurance models for COMSEC devices will not, for example, scale to the DoD's Joint Tactical Radio System (JTRS) program. In addition, composition has always been a problem in the context of assurance: The current state of knowledge about how to compose systems well, and to know what has been composed, is inadequate for assurance, and that inadequacy is compounded by the problem of malicious intent.
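Why assurance fails to compose can be shown with a deliberately simple sketch. The components and numbers below are hypothetical, not drawn from any program discussed at the workshop: two components that each pass certification against a timing budget can still violate the system budget when combined, because each certification silently assumed it had the resource to itself.

```python
# Total time available per control-loop frame (hypothetical system budget).
SYSTEM_BUDGET_MS = 10

class Component:
    def __init__(self, name: str, certified_worst_case_ms: int):
        self.name = name
        self.wcet = certified_worst_case_ms  # certified worst-case execution time

    def assured_in_isolation(self) -> bool:
        # Each certification implicitly assumed the component had the
        # whole frame to itself -- a hidden shared-resource assumption.
        return self.wcet <= SYSTEM_BUDGET_MS

sensor_fusion = Component("sensor_fusion", certified_worst_case_ms=6)
built_in_test = Component("built_in_test", certified_worst_case_ms=7)

# Both components pass their individual certification checks...
assert sensor_fusion.assured_in_isolation()
assert built_in_test.assured_in_isolation()

# ...but the composed system overruns the frame: 6 + 7 > 10.
composed_wcet = sensor_fusion.wcet + built_in_test.wcet
assert composed_wcet > SYSTEM_BUDGET_MS  # the system-level property fails
```

The hidden assumption never appears in either component's interface, which is the essence of the composition problem: individually valid assurance arguments do not add up to a system-level argument.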
A challenge for NSA's Center for Assured Software is to be able to scale assurance. To do that, the future assurance paradigm needs to acknowledge the role of the development process in the assurance argument. How software is built, what processes are used, and what tools are used all have to be part of the assurance argument. That is a subtle but important shift in the paradigm.

In the speaker's view, the way to achieve scale in the development process, and the way to gain assurance in the development process and in third-party analysis, is by increasing the extent of automation. The current paradigm does not embrace that means of achieving scale very well. Previous measurement techniques mostly entailed humans looking for vulnerabilities. What the Center for Assured Software is trying to do is find correlations between assurance and positive things that can be measured (for instance, the properties that are important) to give confidence that the software is indeed built appropriately. Another area where work is needed, it was suggested, is to create a science of composition that enables making an argument for levels of assurance at scale. In the mid-1980s, there were attempts to do that with the Trusted Database Management System Interpretation of the Trusted Computer System Evaluation Criteria (often referred to as the Orange Book), but the results did not scale very well.

Participants mentioned a variety of ideas being pursued in industry and academia in response to business and government needs in the area of software assurance: anomaly identification, model checking, repeatable methodology for assurance analysis and evaluation, and intermediate representation of executable code.19 Suggested research areas mentioned during this discussion include these:

• Assurance composition,

• Verifiable compilation,

• Software annotation,

• Model checking,

• Safe languages and automated migration from unsafe languages,

• Software understanding, and

• Measurable attributes that have a strong correlation to assurance.

More broadly, participants suggested that it will be important to understand how to build confidence from all of these (and other) approaches and to improve them. In particular, it will be important to understand how they "combine" (that is, what multiple techniques collectively convey regarding confidence), since it is at best highly unlikely that any one technique will ever by itself be sufficient.

18 NIAP is a U.S. government initiative originated to meet the security testing needs of both information technology consumers and producers and is operated by the NSA (see http://www.niap-ccevs.org/). The Trusted Product Evaluation Program (TPEP) was started in 1983 to evaluate COTS products against the Trusted Computer Systems Evaluation Criteria (TCSEC).
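As a small illustration of the automation theme, a hypothetical rule-based checker (not any NSA or vendor tool) can mechanically flag calls that a secure-coding standard bans. This is the kind of check that scales to large code bases in a way that line-by-line human review does not; the banned-function list and the sample source are invented for the sketch:

```python
import re

# Hypothetical rule set: C functions a secure-coding standard might ban,
# each paired with a suggested safer alternative.
BANNED_CALLS = {
    "gets": "fgets",
    "strcpy": "strncpy or strlcpy",
    "sprintf": "snprintf",
}

def scan_source(source: str):
    """Return (line_number, banned_call, suggestion) for each violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for banned, better in BANNED_CALLS.items():
            # Match the identifier followed by '(' so that, e.g., a call
            # to 'strcpy_s(' is not flagged as 'strcpy('.
            if re.search(rf"\b{banned}\s*\(", line):
                findings.append((lineno, banned, better))
    return findings

sample = """
char buf[16];
gets(buf);            /* unbounded read */
strcpy_s(buf, 16, s); /* bounds-checked: should not be flagged */
sprintf(buf, "%s", s);
"""

assert scan_source(sample) == [(3, "gets", "fgets"), (5, "sprintf", "snprintf")]
```

A real analyzer works on parsed or intermediate representations rather than raw text, but even this crude form shows the shift the speaker described: the measurement is repeatable, cheap to rerun on every build, and independent of any individual reviewer's attention.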
SUMMARY OF WORKSHOP DISCUSSIONS

Software Assurance: Present and Future

This vendor-perspective presentation and discussion focused on COTS (it was suggested that 80 percent of DoD systems have at least some COTS components) and on taking tactical, practical, and economical steps at the component level to improve assurance. As scale and interconnectivity increase, it was argued that the assurance bar for software quality and cybersecurity attributes can be raised by (1) raising the component assurance bar (resources are finite and organizations can spend too much time and too many resources trying to patch their way to security) and (2) getting customers to understand and accept that assurance for custom software can be raised if they are willing to pay more (if customers do not know about costs that are hidden, they cannot accept or budget for them).

19. Other approaches include static analysis, extended static checkers, and rule-based automatic code analysis.

One set of best practices and technologies to write secure software was described. It includes

• Secure coding standards,
• Developer training in secure coding,
• Enabled, embedded security points of contact (the "missionary model"),
• Security as part of development, including functional, design, and test work (with threat modeling),
• Regressions (including destructive security tests),
• Automated tools (home grown and commercial, of multiple flavors),
• Locked-down configurations (delivering products that are secure on installation), and
• Release criteria for security.

However, these practices are not routinely taught in universities. Neither the software profession nor the industry as a whole can simply rely on a few organizations doing these kinds of things. Discussion identified some necessary changes in the long run:

• University curricula. It was argued that university programs should do a better job of teaching secure coding practices and training future developers to pay attention to security as part of software development. If the mindset of junior developers does not change, the problem will not be solved. One participant said, "Process won't fix stupidity or arrogance." Incentives to be mindful of security should be integrated throughout the curriculum. When security is embedded throughout the development process, a small core of security experts is not sufficient.
One challenge is how to balance the university focus on enduring knowledge and skills against the need for developers to understand particular practices and techniques specific to current technologies.

• Automation. Automated tools are promising and will be increasingly important, but they are not a cure-all. Automated tools are not yet ready for universal prime time for a number of reasons, including: Tools need to be trained to understand the code base; programmers have difficulty establishing sound and complete rules; most of today's tools look only for anticipated vulnerabilities (e.g., buffer overruns) and cannot be readily adapted to new classes of vulnerabilities; there are often too many false positives; scalability is an issue; one size does not fit all (it is premature for standards) and therefore multiple tools are needed; and there is not a good system for rating tools.

Conventional wisdom holds that people will not pay more for secure software. However, people already are paying for bad security: a 2002 study by the National Institute of Standards and Technology (NIST) reported that the consequences of bad software cost $59 billion a year in the United States.20 It was argued that from a development standpoint, security cost-effectiveness should be measured pragmatically. However, a simple return on investment (ROI) is not the right metric. From a developer's perspective, the goal should be the highest net present value (NPV) for cost avoidance of future bugs, not raw cost savings or the ROI from fixing bugs in the current year. Another way of valuing security is opportunity cost savings: what can be done (e.g., building new features) with the resources saved from not producing patches. From the customer's perspective, it is the life-cycle cost of applying a patch weighed against the expected cost of the harm from not applying the patch. Customers want predictable costs, and the perception is that they cannot budget for the cost of applying patches (even though the real cost driver is the consequences of unpatched systems). If customers know what they are getting, they can plan for a certain level of risk at a certain cost. The goal is to find the match between expected risk for the customer and for the vendor: how suitable the product is for its intended use.
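The NPV framing can be made concrete with a small sketch. All figures here (the discount rate, the avoided yearly patch costs, and the up-front investment) are hypothetical, chosen only to show the arithmetic:

```python
def npv(cashflows, rate):
    """Net present value of yearly cash flows; cashflows[t] is year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical scenario: an up-front secure-development investment of
# 100 units avoids an estimated 40 units/year of future bug-fix and
# patch costs over five years, discounted at 10 percent.
avoided_costs = [0] + [40] * 5      # nothing avoided in year 0
investment = 100
value = npv(avoided_costs, rate=0.10) - investment
print(round(value, 2))              # positive NPV from multi-year cost avoidance
```

Measured as current-year ROI, the same investment could look unattractive; measured as the NPV of several years of avoided patching, it comes out positive, which is the distinction drawn above.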
Certification is a way of assessing what is "good."21 But participants were not optimistic when considering prospects for certification of development processes. There is too much disagreement and ideology surrounding development processes. However, there can be some commonality around aspects of good development processes. Certifying developers is also problematic. In engineering, there are accredited degree programs and clear licensing requirements. The awarding of a degree in computer science is not analogous to licensing an engineer because there is not the same common set of requirements, especially robustness and safety requirements. In addition, it can be difficult to replicate the results of software engineering processes, making it hard to achieve confidence such that developers are willing to sign off on their work. Moreover, it was argued that with current curricula, developers generally do not even learn the basics of secure coding practice. There is little to no focus on security, safety, or the possibility that the code is going to be attacked in most educational programs. It was argued that curricula need to change and that computer science graduates should be taught to "assume an enemy." Automated tools can give better assurance to the extent that vendors use them in development and fix what they find. Running evaluation tools after the fact on something already in production is not optimal.22 It was suggested that there is potential for some kind of "goodness meter" (a complement to the "badness meter" described in the next section) for tool use and effectiveness: what tool was used, what the tool can and cannot find, what the tool did and did not find, the amount of code covered, and that tool use was verified by a third party.

20. See NIST, 2002, "Planning Report 02-3: The economic impacts of inadequate infrastructure for software testing." Available online at http://www.nist.gov/director/prog-ofc/report02-3.pdf.
21. A recent NRC study examines the issue of certification and dependability of software systems. See information on the report Software for Dependable Systems: Sufficient Evidence? at http://cstb.org/project_dependable.

Software Security: Building Security In

Discussions in this session focused on software security as a systems problem as opposed to an application problem. In the current state of the practice, certain attributes of software make software security a challenge: (1) connectivity, since the Internet is everywhere and most software is on it or connected to it; (2) complexity, since networked, distributed, mobile code is hard to develop; and (3) extensibility, since systems evolve in unexpected ways and are changed on the fly. This combination of attributes also contributes to the rise of malicious code.

Massively multiplayer online games (MMOGs) are bellwethers of things to come in terms of sophisticated attacks and exploitation of vulnerabilities.
These games experience the cutting edge of what is going on in software hacking and attacks today.23 Attacks against such games are also at the forefront of so-called rootkit24 technology. Examining attacks on large-scale games may be a guide to what is likely to happen in the non-game world.

22. It was suggested that vendors should not be required to vet products against numerous tools. It was also suggested that there is a need for some sort of Common Criteria reform with mutual recognition in multiple venues, eliminating the need to meet both Common Criteria and testing requirements. Vendors, for example, want to avoid having to give governments the source code for testing, which could compromise intellectual property, and want to avoid revealing specifics on vulnerabilities (which may raise security issues and also put users of older versions of the code more at risk). Common Criteria is an international standard for computer security. Documentation for it can be found at http://www.niap-ccevs.org/cc-scheme/cc_docs/.
23. World of Warcraft, for example, was described as essentially a global information grid with approximately 6 million subscribers and half a million people playing in real time at any given time. It has its own internal market economy, as well as a significant black market economy.

It was suggested that in 2006, security started to become a differentiator among commercial products. Around that time, companies began televising ads about security and explicitly offering security features in new products. Customers were more open to the idea of using multiple vendors to take advantage of diversity in features and suppliers.

Security problems are complicated. There is a difference between implementation bugs, such as buffer overflows or unsafe system calls, and architectural flaws, such as compartmentalization problems in design or insecure auditing. As much attention needs to be paid to looking for architectural or requirements flaws as is paid to looking for coding bugs. Although progress is being made in automation, both processes still need people in the loop. When a tool turns up bugs or flaws, it gives some indication of the "badness" of the code, a "badness-o-meter" of sorts. But when use of a tool does not turn up any problems, this is not an indication of the "goodness" of the code. Instead, one is left without much new knowledge at all.

Participants emphasized that software security is not application security. Software is everywhere, not just in the applications. Software is in the operating system, the firewall, the intrusion detection system, the public key infrastructure, and so on. These are not "applications." Application security methods work from the outside in. They work for COTS software, require relatively little expertise, and are aimed at protecting installed software from harm and malicious code. System software security works from the inside out, with input into and analysis of design and implementation, and requires a lot of expertise.
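The asymmetry of the "badness-o-meter" can be sketched in a few lines. The findings and wording below are illustrative only:

```python
def badness_meter(findings):
    """Interpret a static-analysis report the way the discussion above
    suggests: findings are evidence of badness, but an empty report is
    not evidence of goodness, because tools look only for anticipated
    vulnerability classes."""
    if findings:
        return f"bad: at least {len(findings)} known problems"
    return "unknown: no findings, but goodness is not established"

# Hypothetical tool output on two code bases
print(badness_meter(["buffer overrun in parse()", "unsafe system call"]))
print(badness_meter([]))
```

The design point is that the two branches are deliberately asymmetric: a nonempty report lowers confidence, while an empty report leaves confidence where it was, rather than raising it.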
24. A rootkit is a set of software tools that can allow hackers to continue to gain undetected, unauthorized access to a system following an initial, successful attack by concealing processes, files, or data from the operating system.

In one participant's view, security should also be thought of as an emergent property of software, just like quality. It cannot be added on. It has to be designed in. Vendors are placing increased emphasis on security, and most customers have a group devoted to software security. It was suggested that the tools market is growing, for both application security (a market of between $60 million and $80 million) and software security (a market of about $20 million, mostly for static analysis tools). Consulting services, however, dwarf the tools market. One speaker described the "three pillars" of software security:
• Risk management, tied to the mission or line of business. Financial institutions such as banks and credit card consortiums are in the lead here, in part because Sarbanes-Oxley made banks recognize their software risk.
• Touchpoints, or best practices. The top two are code review with a tool and architectural risk analysis.
• Knowledge, including enterprise knowledge bases about security principles, guidelines, and rules; attack patterns; vulnerabilities; and historical risks.

SESSION 5: Enterprise scale and beyond

Panelists: Werner Vogels, Amazon.com, and Alfred Spector, AZS-Services
Moderator: Jim Larus

The speakers at this session focused on the following topics, from the perspective of industry:

• What are the characteristics of successful approaches to addressing scale and uncertainty in the commercial sector, and what can the defense community learn from this experience?
• What are the emerging software challenges for large-scale enterprises and interconnected enterprises?
• What do you see as emerging technology developments that relate to this?

Life Is Not a State-Machine: The Long Road from Research to Production

Discussions during this session centered on large-scale Web operations, such as that of Amazon.com, and what lessons about scale and uncertainty can be drawn from them. It was argued that in some ways, software systems are similar to biological systems. Characteristics and activities such as redundancy, feedback, modularity, loose coupling, purging, apoptosis (programmed cell death), spatial compartmentalization, and distributed processing are all familiar to software-intensive systems developers, and yet these terms can all be found in discussions of robustness in biological systems. It was suggested that there may be useful lessons to be drawn from that analogy.
Amazon.com is very large in scale and scope of operations: It has seven Web sites; more than 61 million active customer accounts and over 1.1 million active seller accounts, plus hundreds of thousands of registered associates; over 200,000 registered Web services developers; over 12,500 employees worldwide; and more than 20 fulfillment centers worldwide. About 30 percent of Amazon's sales are made by third-party sellers; almost half of its sales are to buyers outside the United States. On a peak shipping day in 2006, Amazon made 3.4 million shipments.

Amazon.com's technical challenges include how to manage millions of commodity systems, how to manage many very large, geographically dispersed facilities in concert, how to manage thousands of services running on these systems, how to ensure that the aggregate of these services produces the desired functionality, and how to develop services that can exploit commodity computing power. It, like other companies providing similar kinds of services, faces challenges of scale and uncertainty on an hourly basis.

Over the years, Amazon has undergone numerous transformations: from retailer to technology provider, from single application to platform, from Web site and database to a massively distributed parallel system, from Web site to Web service, from enterprise scale to Web scale. Amazon's approach to managing massive scale can be thought of as "controlled chaos." It continuously uses probabilistic and chaotic techniques to monitor business patterns and how its systems are performing. As its lines of business have expanded, these techniques have had to evolve; for example, focusing on tracking customer returns as a negative metric does not work once product lines expand into clothing (people are happy to order multiple sizes, keep the one that fits, and return the rest). Amazon builds almost all of its own software because the commercial and open source infrastructure available now does not suit Amazon.com's needs.

The old technology adoption life cycle from product development to useful acceptance was between 5 and 25 years.
Amazon and similar companies are trying to accelerate this cycle. However, it was suggested that for an Amazon developer to select and use a research technology is almost impossible. In research, there are too many possibilities to choose from, experiments are unrealistic compared to real life, and underlying assumptions are frequently too constrained. In real life, systems are unstable, parameters change and things fail continuously, perturbations and disruptions are frequent, there are always malicious actors, and failures are highly correlated. In the real world, when the system fails, the mission of the organization cannot stop; it must continue.25

25. Examples of systems where assumptions did not match real life include the Titanic, the Tacoma Narrows bridge, and the Estonian ferry disaster.

Often, complexity is introduced to manage uncertainty. However, there may well exist what one speaker called "conservation laws of complexity." That is, in a complex interconnected system, complexity cannot be reduced absolutely; it can only be moved around. If uncertainty is not taken into account in large-scale system design, it makes adoption of the chosen technology fairly difficult. Engineers in real life are used to dealing with uncertainty. Assumptions are often added to make uncertainty manageable. At Amazon, the approach is to apply Occam's razor: If there are competing systems to choose from, pick the system that has the fewest assumptions. In general, assumptions are the things that are really limiting and could limit the system's applicability to real life.

Two different engineering approaches were contrasted, one with the goal of building the best possible system (the "right" system) whatever the cost, and the other with the more modest goal of building a smaller, less-ambitious system that works well and can evolve. The speaker characterized the former as being incredibly difficult, taking a long time and requiring the most sophisticated hardware. By contrast, the latter approach can be faster, it conditions users to expect less, and it can, over time, be improved to a point where performance almost matches that of the best possible system.

It was also argued that traditional management does not work for complex software development, given the lack of inspection and control. Control requires determinism, which is ultimately an illusion. Amazon's approach is to optimize team communication by reducing team size to a maximum of 8-10 people (a "two-pizza team"). For larger problems, decompose the problem and reduce the size of the team needed to tackle the subproblems to a two-pizza group. If this cannot be done, it was suggested, then do not try to solve that problem; it is too complicated.

A general lesson that was promoted during this session was to let go of control and the notion that these systems can be controlled. Large systems cannot be controlled; they are not deterministic.
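The two-pizza decomposition rule can be sketched as a recursive split. The work sizes, the per-person capacity, and the even halving are all hypothetical simplifications, not Amazon's actual method:

```python
def decompose(work, max_team=10, per_person=5):
    """Split a body of work until each piece can be handled by a
    'two-pizza team' of at most max_team people, assuming each person
    covers per_person units of work (both numbers are invented)."""
    team_needed = -(-work // per_person)        # ceiling division
    if team_needed <= max_team:
        return [work]
    half = work // 2
    return (decompose(half, max_team, per_person)
            + decompose(work - half, max_team, per_person))

pieces = decompose(200)   # 200 units would need ~40 people as one team
print(pieces)             # several pieces, each within two-pizza range
```

The sketch captures only the sizing rule; the hard part in practice, which the recursion ignores, is finding split points with low coupling between the resulting subproblems.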
For various reasons, it is not possible to consider all the inputs. Some may not have been included in the original design; requirements may have changed; the environment may have changed. There may be new networks and/or new controllers. The problem is not one of control; it is dealing with all the interactions among all the different pieces of the system that cannot be controlled. Amazon.com's approach is to let these systems mature incrementally, with iterative improvement to yield the desired outcome during a given time period.

The Old, the Old, and the New

In this session's discussions, the first "old" was the principle of abstraction-encapsulation-reuse. Reuse is of increasing importance everywhere as the sheer quantity of valuable software components continues to grow. The second "old" was the repeated quest (now using Web services and increasingly sophisticated software tools) to make component reuse and integration the state of practice. Progress is being made in both of these areas, as evidenced by investment and anecdotes. The "new" discussed was the view that highly structured, highly engineered systems may have significant limitations. Accordingly, it was argued, "semantic integration," more akin to Internet search, will play a more important role.

There are several global integration agendas. Some involve broad societal goals such as trade, education, health care, and security. At the firm or organization level, there is supply chain integration and N to 1 integration of many stores focusing on one consumer, as in the case of Amazon and its many partners and vendors. In addition, there is collaborative engineering, multidisciplinary R&D, and much more.

Why is global integration happening? For one thing, it is now technically possible, given ubiquitous networking, faster computers, and new software methodologies. People, organizations, computation, and development are distributed, and networked systems are now accepted as part of life and business, along with the concomitant benefits and risks (including security risks). An emerging trend is the drive to integrate these distributed people and processes to get efficiency and cost-effective development derived from reuse.

Another factor is that there are more software components to integrate and assemble. Pooling of the world's software capital stock is creating heretofore unimaginably creative applications. Software is a major element of the economy. It was noted that by 2004, the amount of U.S. commercial capital stock relating to software, computer hardware, and telecommunications accounted for almost one-quarter of the total capital stock of business; about 40 percent of this is software.
Software's real value in the economy could even be understated because of accounting rules (depreciation), price decreases, and improvements in infrastructure and computing power. The IT agenda and societal integration reinforce each other.

Core elements of computer science, such as algorithms and data structures, are building blocks in the integration agenda. The field has been focusing more and more on the creation and assembly of larger, more flexible abstractions. It was suggested that if one accepts that the notion of abstraction-encapsulation-reuse is central, then it might seem that service-oriented computing is a done deal. However, the challenge is in the details: How can the benefits of the integration agenda be achieved throughout society? How are technologists and developers going to create these large abstractions and use them?

When the Internet was developed, some details, such as quality of service and security, were left undone. Similarly, there are open challenges with regard to integration and service-oriented approaches. What are the complete semantics of services? What security inheres in the service being used? What are the failure modes and dependencies? What is the architectural structure of the world's core IT and application services? How does it all play out over time? What is this hierarchy that occurs globally or, for the purposes of this workshop, perhaps even within DoD or within one of the branches of the military?

Service-oriented computing is computing whereby one can create, flexibly deploy, manage, meter and charge for (as appropriate), secure, locate, use, and modify computer programs that define and implement well-specified functions having some general utility (services), often recursively using other services developed and deployed across time and space, and where computing solutions can be built with a heavy reliance on these services. Progress in service-oriented computing brings together information sharing, programming methodologies, transaction processing, open systems approaches, distributed computing technologies, and Web technologies.

There is now a huge effort on the part of industry to develop application-level standards. In this approach, companies are presented with the definition of some structure that they need to use to interoperate with other businesses, rather than, for example, having individual fiefdoms within each company develop unique customer objects.

The Web services approach generally implies a set of services that can be invoked across a network. For many, Web services comprise things such as Extensible Markup Language (XML) and SOAP (a protocol for exchanging XML-based messages over computer networks), along with a variety of Web service protocols that have now been defined and are heavily used, developed, produced, and standardized (many in a partnership between IBM and Microsoft).
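As a minimal illustration of the XML-plus-SOAP style of invocation, the following builds a SOAP 1.1 envelope with the Python standard library. The service namespace and the GetQuote operation are hypothetical, not a real endpoint:

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 namespace
SVC = "http://example.com/stockquote"                # hypothetical service

# A SOAP message is an Envelope whose Body carries the operation call.
envelope = ET.Element(f"{{{SOAP}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
call = ET.SubElement(body, f"{{{SVC}}}GetQuote")     # hypothetical operation
ET.SubElement(call, f"{{{SVC}}}Symbol").text = "IBM"

message = ET.tostring(envelope, encoding="unicode")
print(message)   # the XML payload that would be POSTed to the service
```

The point of the exchange format is visible in the structure: the envelope and body are standardized across all services, while everything inside the body is defined by the application-level standard the two parties agree on.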
Web services are on the path to full-scale, service-oriented computing; it was argued that this path can be traced back to the 1960s and the airlines' Sabre system, continuing through Arpanet, the Internet, and the modern World Wide Web.

Web services based on abstraction-encapsulation-reuse are a new approach to applying the structure-oriented engineering tradition to information technology (IT). For example, integration steps include the precise definition of function (analogous to the engineering specifications and standards for transportation system construction), architecture (analogous to bridge design, for example), decomposition, rigorous component production (steel beams, for example), careful assembly, and managed change control. The problem is, there may be limits to this at scale. In software, each of these integration steps is difficult in itself. Many projects are inherently multiorganizational, and rapid changes have dire consequences for traditional waterfall methodologies.
38 SOFTWARE-INTENSIVE SYSTEMS AND UNCERTAINTY AT SCALE It was argued that âsemantic integration,â a dynamic, fuzzier inte- gration more akin to Internet search, will play a larger role in integration than more highly structured engineering of systems. Ad hoc integration is a more humble approach to service-based integration, but it is also more dynamic and interpretive. Components that are integrated may be of lower generality (not a universal object) and quality (not so well specified). Because they will be of lower generality, perhaps with dif- ferent coordinate systems, there will have to be automated impendence matching between them. Integration may take place on an intermediate service, perhaps in a browser. Businesses are increasingly focusing on this approach for the same reasons that simple approaches have always been favored. This is a core motivational component of the Web 2.0 mash-up focus. Another approach to ad hoc integration uses access to massive amounts of informationâwith no reasonable set of predefined, param- eterized interfaces, annotation and search will be used as the integration paradigm. It is likely that there will be tremendous growth in the standards needed to capitalize on the large and growing IT capital plant. There will be great variability from industry to industry and from place to place around the world, depending on the roles of the industry groups involved, differential regulations, applicable types of open source, and national interests. Partnerships between the IT industry and other indus- tries will be needed to share expertise and methodologies for creating usable standards, working with competitors, and managing intellectual property. A number of topics for service-oriented systems and semantic inte- gration research were identified, some of which overlap with traditional software system challenges. 
The service-oriented systems research areas and semantic integration research areas spotlighted included these:

• Basics. Is there a practical, normative general theory of consistency models? Are services just a remote procedure call invocation or a complex split between client and server? How are security and privacy to be provided for the real world, particularly if one does not know what services are being called? How does one utilize parallelism? This is an increasingly important question in an era of lessening geometric clock-speed growth.
• Management. With so many components and so much information hiding, how does one manage systems? How does one manage intellectual property?
• Global properties. Can one provide scalability generally? How does one converge on universality and standards without bloat? What systems can one deploy as really universal service repositories?
• Economics. What are realistic costing/charging models and implementations?
• Social networking. How does one apply social networking technology to help?
• Ontologies of vast breadth and scale.
• Automated discovery and transformation.
• Reasoning in the control flow.
• Use of heuristics and redundancy.
• Search as a new paradigm.

Complexity grows despite all that has been done in computer science. There is valuable, rewarding, and concrete work for the field of computer science in combating complexity. This area of work requires focus. It could prove as valuable as direct functional innovation. Participants identified several research areas to address complexity relevant to service-oriented systems and beyond, including: meaning, measuring, methodology, system architecture, science and technology, evolutionary systems design, and legal and cultural change.