Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force

F  Model Repositories and Assembly and Integration of Models

Bernard Zeigler, University of Arizona
Paul K. Davis, RAND and the RAND Graduate School

BASIC CONCEPTS

It is a waste to have to reinvent the wheel each time a new car is designed. Yet as successive generations of simulations were developed in the past, such wasteful restarts from scratch were the rule rather than the exception. The advent of object-oriented design and programming has now provided the technology to support object repositories, where objects may be reused time and time again. Models are stored in a database called a model base. Suppose that we undertake a project to construct a new model for given objectives. Models that can serve as components for the new model are retrieved from the model base; to synthesize or assemble the new model, the components must then be coupled together appropriately. When validated, verified, or otherwise properly accredited, the new model is stored in the model base so that it can be reused in the future (see Figure F.1 and Zeigler, 1990, for more details). Unfortunately, this scenario is easier to describe than to bring into common practice. Some of the issues that arise are as follows:

How can a modeler discover models that are relevant to project objectives?

How can models be designed so that they can not only serve their current purposes but also anticipate future needs?

How can models be decomposed so that their components can be placed in the model base and recoupled later in different configurations? (Recall that models employed in geographically dispersed simulations can be distributed over computers in many locations, compounding the problem.)
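The retrieve-assemble-accredit-store cycle described above can be sketched in miniature. This is purely an illustration; the class names, tags, and accreditation flag below are our own invention, not part of any actual repository system:

```python
from dataclasses import dataclass

# A minimal, hypothetical model-base sketch: components are cataloged by
# application-domain tags so a modeler can discover candidates for reuse,
# and only accredited models may enter the repository.

@dataclass
class ModelEntry:
    name: str
    tags: frozenset           # e.g. {"air", "attrition", "theater"}
    accredited: bool = False  # validated/verified/accredited for reuse

class ModelBase:
    def __init__(self):
        self._entries = {}

    def store(self, entry: ModelEntry):
        # Only properly accredited models are admitted to the model base.
        if not entry.accredited:
            raise ValueError(f"{entry.name} is not accredited for reuse")
        self._entries[entry.name] = entry

    def retrieve(self, *needed_tags):
        # Discovery: return entries relevant to the project objectives.
        needed = set(needed_tags)
        return [e for e in self._entries.values() if needed <= e.tags]

base = ModelBase()
base.store(ModelEntry("AirSortieModel", frozenset({"air", "attrition"}), True))
base.store(ModelEntry("SealiftModel", frozenset({"sea", "mobility"}), True))

candidates = base.retrieve("air")
print([e.name for e in candidates])  # components relevant to an air study
```

Even this toy version exposes the hard part: the tags are a stand-in for the cataloging-by-type-application-and-case problem discussed below, which is where real discovery efforts succeed or fail.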
FIGURE F.1 Repository model base concept.

None of these problems is easily solved, but the modeling and simulation (M&S) framework provides some starting points:

Cataloging elements of the model base by type, application, and case. Analysts and other users of M&S have long reused particular model versions and database versions. This is often referred to as using existing “scenarios,” although that is an unfortunate use of the term. However, the number of variations available, understood, and stored has typically been quite small (1 to 10, say, rather than hundreds). Further, it has typically been difficult to modify any of these stored models, in part because they have often been developed tediously so as to generate a particular “scripted behavior” involving large numbers of interacting entities and processes, which means that “small” changes can have repercussions throughout. In the future, much more should be possible.

Hierarchical modular model construction. To be reusable, models must be self-contained, with input-output ports as we have assumed in the system specification hierarchy. The model resulting from the coupling of its components must also be modular in this sense, so that it too can be used as a component in larger models.

Building-block components for application domains. With some foresight it may be possible to design components from which a wide variety of models can be synthesized for a particular application domain. Thus, rather than focus entirely on the models needed for the particular project, model designers “regress” to a lower layer and search for good “primitives” that span the application domain.

Coupling templates. Going hand in hand with the building blocks are standardized means of coupling them together. The blocks must be designed to have the input and output ports that can be coupled together as assumed by the templates.
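The essence of hierarchical modular construction — components with input-output ports whose coupling is itself a modular component — can be sketched as follows. The port and coupling scheme here is a deliberately simplified caricature (feed-forward only), not the formal system specification hierarchy:

```python
# Simplified sketch of hierarchical modular models: every model exposes
# named input and output ports, and a coupled model is itself a model
# with ports, so it can serve as a component in still-larger models.

class Model:
    def __init__(self, name, in_ports, out_ports):
        self.name, self.in_ports, self.out_ports = name, in_ports, out_ports

    def react(self, inputs):
        """Map values on input ports to values on output ports."""
        raise NotImplementedError

class Doubler(Model):
    def __init__(self, name):
        super().__init__(name, ["in"], ["out"])
    def react(self, inputs):
        return {"out": 2 * inputs["in"]}

class Coupled(Model):
    """Couples components via (src, src_port) -> (dst, dst_port) links,
    yet is itself modular: it has its own external ports."""
    def __init__(self, name, components, couplings, in_map, out_map):
        super().__init__(name, list(in_map), list(out_map))
        self.components = components  # name -> Model, in feed-forward order
        self.couplings = couplings    # internal links between components
        self.in_map = in_map          # external input -> (component, port)
        self.out_map = out_map        # external output -> (component, port)

    def react(self, inputs):
        values = {}  # (component, port) -> value
        for ext_port, (comp, port) in self.in_map.items():
            values[(comp, port)] = inputs[ext_port]
        for comp_name, comp in self.components.items():
            local_out = comp.react(
                {p: values[(comp_name, p)] for p in comp.in_ports})
            for (src, sp), (dst, dp) in self.couplings.items():
                if src == comp_name:
                    values[(dst, dp)] = local_out[sp]
            for p in comp.out_ports:
                values[(comp_name, p)] = local_out[p]
        return {ext: values[cp] for ext, cp in self.out_map.items()}

# Assemble two doublers in series; the result is again a modular component
# with ports, exactly as a coupling template would assume.
quad = Coupled(
    "quadrupler",
    components={"d1": Doubler("d1"), "d2": Doubler("d2")},
    couplings={("d1", "out"): ("d2", "in")},
    in_map={"in": ("d1", "in")},
    out_map={"out": ("d2", "out")},
)
print(quad.react({"in": 3}))  # the coupled model behaves like one model
```

Because `quad` has the same port interface as `Doubler`, it can itself be placed in a larger `Coupled` model, which is the closure property that makes hierarchical reuse possible.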
Reusability has obvious benefits: millions of dollars potentially saved through faster project completion, and more reliable results with reduced manpower. Nevertheless, repository-based M&S has its costs in terms of specific design and maintenance requirements, as suggested above. Since these extra activities are not required for any particular project, they are likely to be considered burdensome overhead for each such project. Given limited time and resources, a manager may be much more interested in completing the current project successfully than in laying the basis for the successful completion of future projects. However, an organization should adopt a long-term perspective in which the extra overhead incurred, especially in the first few projects, is traded off against the tremendous benefits that may accrue to future projects. In the context of advanced distributed simulation, multiple organizations may be involved in model development. The added complexity of coordinating individual efforts may greatly increase the difficulties of achieving reusability, while at the same time increasing the payoffs of doing so.

Models developed from systems concepts have identified input and output ports that enable them to be coupled together to form larger aggregates. However, models developed before object-oriented concepts took hold may be valuable, and it might be cost-effective to reuse them as well. The hurdles in trying to salvage such legacy models (e.g., TACWAR and EADSIM) are formidable. The problems in trying to interoperate or integrate a collection of such models arise from these complications:

They may have been developed for disparate objectives, often not clearly stated.

They may have made various assumptions, often undocumented and possibly inconsistent.

They may be built with varying levels of detail (resolution and scope).
They may be implemented in disparate coded forms (languages, operating systems, and so on). Worse still, the experimental frame and simulation features may be tightly entangled with the model per se.

In contrast to the forward design of reusable object-oriented repositories, the backward retrofitting of legacy models may entail more cost than benefit. Sometimes it is possible to “wrap” a legacy model within an object interface so that it can properly interact with other objects. However, the above-mentioned problems may be so prevalent as to make the effectiveness of such wrapping highly questionable. A more tractable integration may be possible where the outputs of models are not fed to the inputs of other models but instead are employed to initialize their states or parameters. In this case, the models do not constitute components in a larger coupled model and do not have to meet the stronger requirements for consistent time advance and input-output compatibility.
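The “wrapping” idea amounts to an adapter: a port-style object interface that translates to and from the legacy model's native formats. The legacy model below — a batch routine driven by a flat key=value input deck — is a hypothetical stand-in, invented only to illustrate the pattern:

```python
# Sketch of wrapping a legacy batch model behind a port-style interface.
# legacy_attrition_run is a made-up stand-in for an older simulation code
# that consumes a flat text input deck and emits a text report.

def legacy_attrition_run(card_deck: str) -> str:
    # Hypothetical legacy batch model: parse key=value lines, emit a report.
    params = dict(line.split("=") for line in card_deck.splitlines() if line)
    losses = float(params["RED_STRENGTH"]) * float(params["KILL_RATE"])
    return f"LOSSES={losses:.1f}"

class LegacyWrapper:
    """Presents input/output 'ports' while translating to and from the
    legacy model's batch input and report formats."""
    in_ports = ["red_strength", "kill_rate"]
    out_ports = ["losses"]

    def react(self, inputs):
        # Translate port inputs into the legacy input deck ...
        deck = (f"RED_STRENGTH={inputs['red_strength']}\n"
                f"KILL_RATE={inputs['kill_rate']}\n")
        # ... run the legacy code, then parse its report back into a port.
        report = legacy_attrition_run(deck)
        return {"losses": float(report.split("=")[1])}

wrapped = LegacyWrapper()
print(wrapped.react({"red_strength": 100, "kill_rate": 0.05}))
```

The wrapper fixes only the surface syntax; it does nothing about the deeper problems listed above (undocumented assumptions, mismatched resolution, entangled experimental frames), which is why wrapping alone is often of questionable effectiveness.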
DESIGNING FOR ASSEMBLY OF APPLICATION-SPECIFIC MODELS

In the discussion above we emphasized the synthesis or assembly of application-specific models from components. This may seem a straightforward suggestion, but it is distinctly at odds with traditional practice. Most existing large-scale DOD models of which we are aware were designed as a whole and are essentially monoliths. A few of the better-designed models have knobs and switches that allow some features to be turned off and on, permitting a run-time choice between high- and low-resolution depictions, but these are exceptions, and even in these models other complicated features are built in or interconnected in complex ways. The result has been that large and complex models have been used repeatedly for analysis that should logically have been done with much narrower models having fewer degrees of freedom. The old adage taught to all competent analysts is that a model should be as simple as possible, but as complicated as necessary. While the adage is widely given lip service, it is routinely ignored by dyed-in-the-wool modelers and simulators, and even by analysts who should know better, or who do know better but are stuck with monolithic tools. Why is this so important? The answer is that good analysis depends on one or a very few minds completely comprehending what is being done. That in turn requires limiting complexity unless for some reason one can be confident that the various model components—and their data—are reliable. It would not be so bad if the large models' results depended on only a few uncertain variables, but the reality is that they may be sensitive to dozens, hundreds, or even thousands of uncertain data items.
Some of the data for “peripheral aspects” of the problem may have been carefully established for different studies with different contexts but may be quite wrong for the current study. Their inappropriateness may be difficult to uncover and may insidiously corrupt the results.1 Yet another reason for simplifying is that analysts must understand what they are assuming and what they are varying if they are to draw valid conclusions. Understanding the implications of large numbers of data assumptions is often impossible in practice. This seems unlikely to change unless model families are developed successfully. For all these reasons and more, then, it is desirable for M&S to be designed for assembly. Doing so can greatly improve reusability, quality, and controllability. Only a decade or so ago, it was extremely difficult to design for such features.

1. As one example, one might establish data values for many aspects of logistics if one were attempting to depict a best-estimate version of a particular war. In a subsequent study trading off alternative future forces and weapons, the outcomes might be strongly affected by the carryover data (e.g., one force might do poorly because it runs out of weapons or fuel, or is assumed to stop for a slow logistics tail) when the analysts are implicitly assuming that the future forces would be accompanied by suitable logistics. Such problems are common and insidious in monolithic systems.
That is no longer a limiting factor, so long as maintenance can keep up with changes such as those in operating systems and input-output programs. Unfortunately, the vision we are describing is much more suitable for high-quality (and highly paid) analysts and M&Sers than for “average” personnel, or even for highly talented personnel with only short tours in a given position (a common problem for uniformed officers). Commercial desktop software may provide a familiar analogy. Desktop publishing software is highly flexible. People with desktop publishing skills can make almost anything happen, including changing page size, font, and orientation, and importing graphics from many different authors and graphics programs. For most professionals, however, even highly educated and computer-literate “knowledge workers,” there is value in having a stable, no-surprises software setup for text and viewgraphs, even if it lacks some desirable flexibilities. If models are used routinely for the same tasks, then their users will also want stability; but if they are often used to examine new methods or systems, or for a diversity of purposes, modularity and assembly will be critical.

EXAMPLES FROM A 1980s-ERA SYSTEM

Many of the points made abstractly above can be illustrated by the history (both good and bad) of a major 1980s analytic war game, the RAND Strategy Assessment System (RSAS).2 The RSAS was a global analytic war gaming system. It could represent joint warfare in multiple theaters, even the “intercontinental theater” of global nuclear war. However, it was designed with the intention of serving many purposes and being as flexible as possible. Submodels were developed for air, land, and sea operations, as well as for strategic mobility. These were building-block models.
Other building blocks were decision models representing the behavior of theater commanders and top-level military and political authorities.3 The theater-commander models took the form of alternative adaptive war plans, such as rigid defense at the inner-German border versus a defense strategy that permitted early fallbacks to the Weser-Lech “line” if necessary. Warsaw Pact strategies varied with respect to the sectors of concentration, the use of the Austrian corridor (a high-risk, high-payoff strategy), and the use of airpower. Both sides' plans included nuclear options and adaptations to the other side's nuclear use.

2. The RSAS no longer exists. After the disintegration of the Soviet Union, there was very little support for continued maintenance and upgrade. Further, the existing software became outdated as new operating systems and commercial graphics emerged. For these and other reasons, many features of the RSAS slipped into archives. However, a stripped-down and improved version of the warfighting models was developed and named the Joint Integrated Contingency Model (JICM). It is now being used, along with other legacy systems, for operational- and theater-level work by RAND, OSD (PA&E), the Air Staff, and the war colleges.

3. These decision models amounted to “agent-based modeling,” to use the current vernacular. Indeed, they were called Red and Blue agents because of the links to concepts in the artificial intelligence community.

Particular instantiations of the RSAS were created for particular theaters, notably Europe's Central Region and, to a lesser degree, Southwest Asia, Korea, and the “theater” of intercontinental nuclear war. These were constructed with relatively specific purposes in mind, for example, (1) evaluation of alternative force structures (e.g., to support analysis for the Conventional Forces in Europe negotiations), (2) characterization of the military balances, (3) evaluation of alternative strategies for theater- and global-level force employment, and, importantly, (4) support of joint war games at the various war colleges and the National Defense University. These instantiations, once created, were then used repeatedly. In any given application, however, there were many “coupling problems” to deal with. For example, the political-level models might choose to escalate as a function of the opponent's “level of conflict” on an escalation ladder, but the analyst had to specify how the simulation would translate physical events, such as the number, location, and time of nuclear detonations, into a “level of conflict.” As another example, the two sides' theater-level decision models had to be given alternative adaptive war plans to choose among. Typically, some of these plans were built specifically for the given study. Each such plan, and the decision rules for adapting or changing plans, typically involved some variables that had to be specified by the analyst (e.g., variables related to complex political judgments and associated military constraints).
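The first coupling problem mentioned — translating physical events into a “level of conflict” — can be caricatured as an analyst-specified mapping. The ladder rungs and thresholds below are invented for illustration and are not taken from the RSAS:

```python
# Illustrative analyst-specified translation from simulated physical
# events to a "level of conflict" on an escalation ladder. The rungs
# and thresholds are invented; in practice the analyst would have to
# supply and defend such a mapping for the study at hand.

ESCALATION_LADDER = [
    "conventional",           # rung 0
    "demonstration_nuclear",  # rung 1: a single demonstration detonation
    "theater_nuclear",        # rung 2: repeated battlefield/theater use
    "general_nuclear",        # rung 3: strikes on the homeland
]

def level_of_conflict(detonations):
    """detonations: list of (location_type, yield_kt) tuples observed so far."""
    if not detonations:
        return ESCALATION_LADDER[0]
    if any(loc == "homeland" for loc, _ in detonations):
        return ESCALATION_LADDER[3]
    if len(detonations) == 1:
        return ESCALATION_LADDER[1]
    return ESCALATION_LADDER[2]

# The political-level decision model keys its escalation choice off this
# derived level rather than off the raw physical events.
print(level_of_conflict([("battlefield", 10), ("battlefield", 10)]))
```

The point of the sketch is that the mapping itself is an analyst judgment, not a model output, which is exactly why such couplings required expert tailoring.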
When the strategic mobility model was used, raw data on the capacity of various types of aircraft for various types of loads had to be translated into the terms used by the model. And at the tactical level of combat, offline studies (or expert discussions) had to translate the complexities of sortie generation, C4ISR, and weapon delivery into average kills per sortie for a type situation. The point here is that a great deal of the system was indeed reusable and modular, but a good deal of expert tailoring was almost always required for competent use. Precisely the same situation exists with the theater-level combat models in extensive use throughout DOD (e.g., CEM, TACWAR, Thunder, and JICM). Significantly, while the developers of the various current models understood the desirability of reusability, it was not technically feasible to imagine large-scale reusability across research organizations. Instead, even with the relatively modern RSAS and JICM, transferability and confederation with other models are quite difficult because of peculiarities associated with, for example, the representation of geography, the operating system, and many other factors. Technical problems, such as multiple changes in Unix operating systems, and the diminution of support after the collapse of the Soviet Union led to major features of the RSAS going into the archives. In the future we can at least aspire to much greater transferability and reuse because of the standards being created (e.g., the HLA). It is plausible and
even likely that object-oriented programming and modular designs consistent with the HLA will make it possible for future systems akin to the RSAS to have long useful lives. This, indeed, is what is hoped for in the JWARS effort. Whether that is achieved depends on the intensity of devotion to keeping the JWARS effort an “open architecture” that can readily accommodate alternative modules and thus evolve if newer and better representations of important objects or processes emerge. The panel's experience has consistently been that day-to-day and economic pressures almost always favor relatively monolithic, not extremely modular, constructions. The reasons are apparent to anyone who has built computer programs with more concern about speed of completion, run-time speed, and “straightforwardness” than about expandability, reuse, modifiability, and so on. This has not changed. Another factor is DOD's frequent emphasis on agreed databases and configuration control, sometimes at the expense of quality. The Department of the Navy should establish a continuing policy of arguing for the modular, assembly-oriented features of JWARS and JSIMS, and should increase the emphasis on such matters in more Navy- and Marine-specific models like NSS.

CAUTIONS ABOUT CROSS-ORGANIZATIONAL M&S AND ONE-SYSTEM CONCEPTS

Despite the theoretical and practical strength of modern model-building concepts and technology, we note that it is an unproved hypothesis that such reusability will be meaningful and sufficiently low-risk to be used in distributed analysis. It would not be surprising if cross-organization model confederations used in distributed simulation 20 years from now were as untrustworthy and impenetrable as large monolithic models are today—when used for tradeoff analysis and other complex tasks.
On the other hand, model confederations have already proved useful, for both training and analysis, in a variety of situations.4 Generalizations are dangerous, and much depends on how DOD manages its M&S in the years ahead. Another caution is that building-block approaches have their limitations. There are costs associated with having a system with too many choices, building blocks, and features. In principle, such a system may be able to serve many different masters, with each assembling the system it needs, but in practice the system may be difficult to comprehend and ponderous—especially when attempting to serve applications across domains with different concepts, purposes, terminology, and measures of effectiveness (e.g., training, test and evaluation, and force planning). As a result, there will continue to be demands for specialized systems with only moderate flexibility. The one-system-serves-all concept should be viewed with considerable suspicion.

4. Examples of successful use of confederations were given at a recent minisymposium (MORS, 1997). See, for example, the paper by Kent Pickett of the Army's TRADOC.