have put us in the interesting position of being limited less by our ability to sense and actuate, to compute and communicate, and to fabricate and manufacture new materials than by our ability to understand, design, and control their interconnection and the resulting complexity. While component-level problems will continue to be important, systems-level problems will be even more so. Further, “components” (e.g., sensors) increasingly need to be viewed as complex systems in their own right. This “system of systems” view is coming to dominate technology at every level. It is, for example, a basic element of DOD's thinking in contexts involving the search for dominant battlefield awareness (DBA), dominant battlefield knowledge (DBK), and long-range precision strike.

At the same time, virtual reality (VR) interfaces, integrated databases, paperless and simulation-based design, virtual prototyping, distributed interactive simulation, synthetic environments, and simultaneous process/product design promise to take complex systems from concept to design. The potential of this still-nascent approach is well appreciated in the engineering and science communities, but what “it” actually is remains ill defined. For want of a better phrase, we refer to the general approach here as “virtual engineering” (VE). VE focuses on the role of modeling and simulation (M&S) in uncertain, heterogeneous, complex, dynamical systems—as distinct from the more conventional applications of M&S. But VE, like M&S, should be viewed as a problem domain, not a solution method.

In this paper, we argue that the enormous potential of the VE vision will not be achieved without a sound theoretical and scientific basis, which does not now exist. In considering how to construct such a basis, we observe a unifying theme in VE: Complexity is a by-product of designing for reliable predictability in the presence of uncertainty and subject to resource limitations.

A familiar example is smart weapons, where sensors, actuators, and computers are added to counter uncertainties in atmospheric conditions, release conditions, and target movement. Thus, we add complexity (more components, each of increasing sophistication) to reduce uncertainty. But because the components must be built, tested, and then connected, we introduce not only the potential for great benefits but also the potential for catastrophic failures in programs and systems. Evaluating these complexity-versus-controllability tradeoffs is therefore very important, but it can also become conceptually and computationally overwhelming.
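To make the tradeoff concrete, the following is a minimal Monte Carlo sketch, not drawn from the paper: it compares an unguided weapon, which inherits the full release and atmospheric uncertainty, with a “smart” variant whose added sensor, computer, and actuator each carry a small independent failure probability. All distributions, error magnitudes, and failure rates below are hypothetical, chosen only to illustrate the qualitative point.

```python
import random
import statistics

# Toy 1-D miss-distance model (hypothetical numbers throughout).
# An unguided weapon inherits the full release and wind uncertainty;
# the "smart" variant adds a sensor, computer, and actuator, each of
# which can independently fail, leaving the raw error uncorrected.

def miss_unguided(rng):
    release_error = rng.gauss(0.0, 30.0)  # m, uncertain release conditions
    wind_error = rng.gauss(0.0, 20.0)     # m, uncertain atmosphere
    return release_error + wind_error

def miss_guided(rng, p_fail=0.01):
    raw = miss_unguided(rng)
    # Added complexity: three components must all work for a correction.
    if any(rng.random() < p_fail for _ in range(3)):
        return raw                         # a component failed: no correction
    sensed = raw + rng.gauss(0.0, 3.0)     # imperfect measurement of the error
    return raw - 0.9 * sensed              # partial correction of sensed error

rng = random.Random(0)
unguided = sorted(abs(miss_unguided(rng)) for _ in range(100_000))
guided = sorted(abs(miss_guided(rng)) for _ in range(100_000))

print(f"median miss, unguided:  {statistics.median(unguided):6.1f} m")
print(f"median miss, guided:    {statistics.median(guided):6.1f} m")
print(f"99th-pct miss, guided:  {guided[int(0.99 * len(guided))]:6.1f} m")
```

With these assumed numbers, the guided variant's median miss shrinks dramatically, while its worst-case tail is dominated by the rare component failures: exactly the tension between added complexity and reliable predictability described above.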

Because of the critical role VE will play, this technology should be robust, and its strengths and limitations must be clearly understood. The goal of this paper is to discuss the basic technical issues underlying VE in a way that is accessible to diverse communities, ranging from scientists to policy makers and military commanders. The challenges in doing so include intrinsically difficult issues, intensely mathematical concepts, an incoherent theoretical base, and misleading popular expositions of “complexity.”


