system evolution, enabling management of future uncertainty. In this regard, architecture is the primary determiner of modularity and thus of the nature and degree to which multiple design decisions can be decoupled from one another. Consequently, when there are areas of likely or potential change, whether in system functionality, performance, infrastructure, or other areas, architecture decisions can be made to encapsulate them and so increase the extent to which the overall engineering activity is insulated from the uncertainties associated with these localized changes.3
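The encapsulation idea above can be made concrete with a minimal sketch (all names here are hypothetical, not drawn from the report): a volatile area of the system, such as its transport mechanism, is hidden behind a stable interface so that client code is insulated from changes to the concrete implementation.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration: the transport layer is an area of likely change,
# so an architecture decision encapsulates it behind a stable interface.
class Transport(ABC):
    @abstractmethod
    def send(self, payload: str) -> str:
        ...

class InMemoryTransport(Transport):
    """Stand-in implementation; could later be swapped for a network transport
    without any change rippling out to client code."""
    def __init__(self):
        self.log = []

    def send(self, payload: str) -> str:
        self.log.append(payload)
        return "ack"

def dispatch(transport: Transport, messages: list[str]) -> int:
    # Client code depends only on the Transport interface, not on any
    # concrete implementation: the localized change is encapsulated.
    return sum(1 for m in messages if transport.send(m) == "ack")

print(dispatch(InMemoryTransport(), ["a", "b", "c"]))
```

Replacing `InMemoryTransport` with a different implementation leaves `dispatch` and all other clients untouched, which is the insulation from localized change that the passage describes.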

Attention to the architecture is not limited to the design and coding phases of software development. Integrity of the architecture is maintained, often with supporting code analysis tools, throughout the software system lifecycle. This is done because a single software change at any stage, including maintenance late in a system's lifecycle, can violate the key architectural decision parameters essential for acceptable system behavior, for future evolution and enhancement, and for assurance.4 During construction of a system, the architectural perspective is essential to assessing progress and risks and to the ability to make decisions and tradeoffs among various alternatives.
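A minimal sketch of the kind of conformance check such code analysis tools perform (module names and the layering rule are hypothetical): the as-built dependency graph, as recovered from the code base, is compared against the intended architecture, and any edge not sanctioned by the architecture is flagged as a violation.

```python
# Intended layered architecture (hypothetical): "ui" may depend on "logic",
# "logic" may depend on "data", and no other dependencies are permitted.
ALLOWED = {
    "ui": {"logic"},
    "logic": {"data"},
    "data": set(),
}

def violations(observed_edges):
    """Return the observed dependency edges that the declared
    architecture does not permit."""
    return [(src, dst) for src, dst in observed_edges
            if dst not in ALLOWED.get(src, set())]

# Suppose static analysis of the code base recovered these import edges:
edges = [("ui", "logic"), ("logic", "data"), ("ui", "data")]
print(violations(edges))  # flags the edge that bypasses the logic layer
```

Real conformance tools are far richer, but the principle is the same: a single change that introduces an unsanctioned dependency, at any lifecycle stage, is detectable against the declared architectural intent.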

As systems scale up, the effort that must be devoted to architecture also scales up, and slightly more steeply, so that a greater percentage of overall effort is devoted to architectural considerations.5 These include design, tradeoff analysis with respect to quality attributes from requirements, identification and analysis of precedent and related ecosystems, and so on. Boehm and Turner6 note that risk and precedent drive the balance between practices appropriate for precedented systems (i.e., “plan-driven methods”) and practices appropriate for innovative systems (i.e., “agile methods”). As noted in Chapter 1, larger-scale systems most often must include both kinds of practices, especially when architectural design successfully localizes or encapsulates innovative elements and maximizes use of precedented ecosystems and infrastructures. (For further discussion of architecture, see Box 3.1.)

As also noted in Chapter 1, precedented systems are those whose capabilities and attributes are highly similar to those that have been produced before and that therefore do not require significant software innovation. In these cases, from the standpoint of engineering risk, the most critical precedents are not those of requirements but those of architecture: whenever possible, the software architecture should be well understood and derived from an analysis of previous instances of the architecture. That analysis should strongly influence the development of the software architecture for the new system, as should an understanding of the likely evolution of the involved ecosystems (incremental evolution being characteristic of successful ecosystems). Major weapon and command-and-control systems may typically


The nature of modularity and its value to business outcomes are explored in Alan MacCormack, John Rusnak, and Carliss Baldwin, 2007, “The Impact of Component Modularity on Design Evolution: Evidence from the Software Industry,” Harvard Business School Technology & Operations Mgt. Unit, Research Paper No. 08-038, available at SSRN, last accessed August 20, 2010; and Carliss Baldwin and Kim B. Clark, 2000, Design Rules, Volume 1: The Power of Modularity, Cambridge, MA: MIT Press.


One of the first studies of the consistency of modeled architectural intent and as-built reality in a very-large-scale code base was undertaken by Gail C. Murphy, 1996, “Architecture for Evolution,” in Alexander L. Wolf, Anthony Finkelstein, George Spanoudakis, and Laura Vidal, eds., Proceedings of the 2nd International Software Architecture Workshop (ISAW’96), San Francisco: ACM, pp. 83-86. Follow-up work is reported in Martin P. Robillard and Gail C. Murphy, 2003, “FEAT: A Tool for Locating, Describing, and Analyzing Concerns in Source Code,” Demonstration Session, Proceedings of the 25th International Conference on Software Engineering (ICSE’03), Portland, OR, May 2003, pp. 822-823.


See Barry Boehm, Ricardo Valerdi, and Eric Honour, 2008, “The ROI of Systems Engineering: Some Quantitative Results for Software-Intensive Systems,” Systems Engineering 11(3):221-234; Mark W. Maier and Eberhardt Rechtin, 2000, The Art of Systems Architecting, 2nd Ed., Boca Raton: CRC Press; Manuel E. Sosa, Steven D. Eppinger, and Craig M. Rowles, 2004, “The Misalignment of Product Architecture and Organizational Structure in Complex Product Development,” Management Science 50(12):1674-1689; Alan MacCormack, John Rusnak, and Carliss Y. Baldwin, 2006, “Exploring the Structure of Complex Software Designs: An Empirical Study of Open Source and Proprietary Code,” Management Science 52(7):1015-1030; and Manuel E. Sosa, Jürgen Mihm, and Tyson Browning, 2009, “Can We Predict the Generation of Bugs? Software Architecture and Quality in Open-Source Development,” INSEAD Working Paper 2009/45/TOM.


Barry Boehm and Richard Turner, 2003, “Using Risk to Balance Agile and Plan-Driven Methods,” Computer 36(6):57-66, available online, last accessed August 20, 2010.

Copyright © National Academy of Sciences. All rights reserved.