Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st-Century Force

6
Creating and Improving Intellectual and Technological Infrastructure for M&S

KEY TECHNICAL PROBLEMS REQUIRING INVESTMENT

Whereas the “content” of M&S will best be improved with research programs organized around warfare areas, there are some important cross-cutting technical problems that merit separate investment. Some involve modeling theory; some involve infrastructure technology and standardization. Three seem particularly significant in thinking about achieving long-term visions:

- Understanding and M&S of complex systems,
- Families of models, and
- M&S infrastructure.

The first, complex systems, is discussed in Chapter 3. (Appendix B is a much more extensive treatment.) Here the focus is on the second and third.

HIERARCHICALLY INTEGRATED FAMILIES OF MODELS

The first subject involves integrated families of models. (See also Appendix E.) Having such families is important in all domains of M&S. To take merely one example, a JTF commander needs to work for the most part with a highly aggregated view of the theater and forces. However, he also needs to be able to zoom in on particular regions or operations, perhaps because they are critical and must therefore be understood in detail. As a practical matter, all this requires different models (not just a single high-resolution model) because of both complexity and data uncertainty.
FIGURE 6.1 Old-think on model families.

In one sense, families of models have been around for years, but mostly on viewgraphs. In old-think (Figure 6.1), moreover, they were formed by legislating that existing models at different levels of resolution would be considered a family and that detailed models would be used to generate data calibrating less-detailed models. The results of most efforts along this line have been disappointing, if not downright failures.

First, the models declared to be family members often were only casually related. Connecting them proved difficult and ambiguous. In part as a result, but also because of flawed theory, the calibration efforts failed. High-resolution models, for example, often predicted attrition and movement rates that greatly exceeded observed reality—presumably because they were not yet sufficiently complete to reflect many of the delays and other frictional effects that occur in real military operations.1 Also, the high-resolution models often did not address key features of the problem. That is, they had insufficient scope. In other cases, the high-resolution models were credible, but the low-resolution models had no “hooks” for reflecting the high-resolution results. For example, they depended only on deterministic averages of higher-resolution phenomena when statistical or distributional information was critical. The general problem is that models that have not been designed for cross-calibration are often difficult to relate to one another.

1 As an example of the difficulties here, suppose that one wants to use a high-resolution simulation of company-level battle to calibrate the attrition rates of a higher-level model. A company, once it is in battle, may have a very short but intense period of attrition. However, most of the time such a company is not in such a battle. Further, there may be many hours of preparation before any such battle occurs. Accounting for these matters in attempting to provide “average rates” remains extremely difficult conceptually and was beyond the simulation and computational states of the art in past decades.
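The calibration difficulty in footnote 1 can be made concrete with a small numerical sketch. This is our illustration, not material from the report: the functions and all numbers are invented, and a real calibration would involve the statistically weighted summations discussed later in the chapter.

```python
# Illustrative sketch of the footnote's point: a company's attrition is
# episodic and intense, so an "average rate" over the whole period looks
# nothing like the rate during battle.  All numbers are hypothetical.

def average_attrition_rate(episodes, total_hours):
    """Aggregate rate = total losses / total elapsed time, counting the
    long preparation and idle periods that separate intense battles."""
    total_losses = sum(losses for losses, _ in episodes)
    return total_losses / total_hours

def peak_attrition_rate(episodes):
    """Rate (losses per hour) during the most intense episode alone."""
    return max(losses / duration for losses, duration in episodes)

# Two short, intense engagements (losses, duration in hours) inside a
# 48-hour period that is mostly preparation and movement.
episodes = [(30, 0.5), (20, 1.0)]
avg = average_attrition_rate(episodes, total_hours=48.0)
peak = peak_attrition_rate(episodes)
# The peak rate exceeds the average by well over an order of magnitude,
# so naively calibrating a higher-level model to battle-only data would
# grossly overpredict attrition.
```

The gap between `peak` and `avg` is the crux: which of the two (or what weighted blend) the higher-level model should consume depends on how that model represents time and engagement frequency.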
FIGURE 6.2 New-think: integrated hierarchical families of models.

There have also been organizational problems. If the different models of such a hierarchy are owned by different organizations that only occasionally work together, the linkages are more imagined than real, and sometimes cynically constructed when real at all. This is a harsh judgment, and there have been some notable partial successes, but the panel believes the judgment is correct.2

Figure 6.2 suggests an image of “new-think” on these matters. Although it may appear “common-sensical,” it represents a drastically different image than the one followed in the past and assumed appropriate by most in the analytic community. In this image, models at different levels of detail are designed together from the outset so that there is a true integration. Variables from one level “understand” variables at another. Second, models at any given level are designed to make use of data from other levels of resolution. Returning to the attrition example, if we know from historical evidence (and common sense) that attrition is self-limiting because commanders will not tolerate excessive attrition, then someone building a high-resolution model may need to design in corresponding decision rules that could be calibrated against macroscopic information on behaviors (which might be different for different nations' commanders and forces). The main point here is that in building and calibrating models one should be using all the knowledge available, regardless of resolution, and attempting to make the family members consistent with each other.

2 For a review of such matters, see Davis and Hillestad (1993a,b). The latter mentions two efforts, one by the U.S. Air Force and one by the German IABG, that were reasonably successful in developing model families. Both efforts were tightly managed and were within a single organization. The fundamental difficulties in this domain are now recognized by the Defense Modeling and Simulation Office (DMSO) and DARPA.
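The idea of designing a high-resolution decision rule whose parameter is calibrated against macroscopic data can be sketched in a few lines. This is our hypothetical construction, not a method from the report: the engagement model, the tolerance rule, and every number are invented for illustration.

```python
# Hypothetical sketch of the "new-think" hook: a high-resolution engagement
# model carries an explicit commander decision rule (break off when losses
# exceed a tolerance), and that tolerance is calibrated so simulated losses
# match a macroscopic (e.g., historical) loss fraction.

def fight(strength, per_step_loss_frac, loss_tolerance, max_steps=100):
    """Step an engagement; the unit disengages once cumulative losses
    reach the commander's tolerance (a fraction of initial strength)."""
    initial = strength
    for step in range(max_steps):
        if (initial - strength) / initial >= loss_tolerance:
            return strength, step          # commander breaks off
        strength -= strength * per_step_loss_frac
    return strength, max_steps

def calibrate_tolerance(observed_loss_frac, per_step_loss_frac):
    """Coarse search for the tolerance whose simulated loss fraction
    best matches the macroscopic observation."""
    best = None
    for t in [i / 100 for i in range(1, 100)]:
        final, _ = fight(1000.0, per_step_loss_frac, t)
        sim_loss = (1000.0 - final) / 1000.0
        if best is None or abs(sim_loss - observed_loss_frac) < best[1]:
            best = (t, abs(sim_loss - observed_loss_frac))
    return best[0]

tol = calibrate_tolerance(observed_loss_frac=0.20, per_step_loss_frac=0.05)
final, steps = fight(1000.0, 0.05, tol)
```

The calibrated tolerance is the "hook" connecting the levels: the high-resolution model retains its microscopic dynamics, while one behavioral parameter is anchored to aggregate evidence.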
One should not assume that “truth” resides at high resolution, low resolution, or indeed at any one level or in any one characterization. For example, current high-resolution, entity-level simulations often contain rich information on microscopic behaviors, but they have very limited scope and no or inadequate representation of higher-level context (e.g., the JTF commander's objectives, strategy, and constraints). By contrast, that information may be readily seen in more aggregate representations of the same war. Connection and calibration, then, should be two-way.3 This type of thinking is familiar in some types of engineering, but it is quite unusual in combat modeling.

Unfortunately, no one today knows how to carry out the vision of “new-think.” Doing so will require fundamental research as well as applied research on particular problems. There are existence proofs for such models in relatively simple cases, and the beginnings of a theoretical foundation, but there are many theoretical obstacles.4 These matters are discussed more fully in Appendix F. Let it suffice here that operationalizing the ideas suggested in Figure 6.2 is very difficult as a matter of theory. Although there have been many claims to the effect that object-oriented programming creates hierarchies of models, such programming usually focuses on the entities (e.g., corps, division, brigade, battalion, company, platoon, squad). To be sure, such hierarchical entities can be represented more easily in object-oriented programming than with older methods, but the more serious representational problems involve processes (e.g., attrition, movement, and command and control) rather than entities. Relating processes at different levels of aggregation or resolution is conceptually very difficult, and very few military researchers have even attempted to do so rigorously. This is a subject for serious theoretical research.

3 For a simple example of the two-way calibration issue involving maneuver warfare of ground forces, see Davis (1995a), which works out the problem analytically. For other discussions of multi-resolution modeling issues, see articles by Davis and Hillestad in the edited collection of the Military Operations Research Society (MORS) by Bracken et al. (1995).

4 See Davis and Hillestad (1993a) for the report of a workshop on variable-resolution modeling sponsored by DARPA and DMSO. One concept discussed in that workshop is the notion of integrated hierarchical variable resolution (IHVR) modeling. The key point here is that if the models at different resolutions of key processes such as attrition are related by hierarchies of variables, it is “straightforward” to define procedures for calibration. These must involve summations and integrals with appropriate statistical weighting for the application at hand (see also Appendix E).

One Element of Doing Better: More Ambitious High-resolution Simulations

As discussed above, one of the most serious past difficulties in trying to calibrate upward has been that the high-resolution models had insufficient scope and, even within the scope dealt with, incomplete information. For example, in past decades it was not possible to have entity-level simulation extending to division and corps in scope. As a result, the high-resolution simulations focused on, for example, company- or battalion-level combat. The result was that the simulated battles occurred to some extent in a vacuum, without representing the lengthy, complicated preparations and maneuvers that typically precede the battles, much less the associated frictional complications. With increased computational power, however (and with the benefit of improved software engineering), it is now feasible to greatly expand scope. Along with doing so will surely come substantially improved ability to “see” and understand the interrelationship of events at different levels of organization.

A second difficulty with most high-resolution simulations has been their failure to incorporate behavioral models representing decision making at the various echelons. Much of the high-resolution work has employed military officers for command and control, which has its own advantages but complicates or precludes some of the activities needed for analysis. This limitation is also being overcome, slowly, as improvements are made in so-called semiautomated forces (SAFOR). There have now been a number of model developments, notably in the United States and Germany, that have advanced the state of the art in such matters. In the decades ahead, this agent-based modeling will improve greatly—given adequate support and high enough standards. At present, many workers are pleased when the models represent stereotyped doctrinal tactics at low levels, but, with time, the models will become increasingly adaptive and will probably have “learning capability.”5 Currently, the emphasis on SAFOR is at low levels (e.g., company level when dealing with land forces). However, decision models are feasible for all echelons, and some have been demonstrated and even used, with various levels of success.6 The forecast here is cautiously bullish, even though it is likely that selective human play will always be very desirable, not just to calibrate models, but to ensure the range of innovations and “unusual” behaviors.

There are other potential and important improvements that should be sought in high-resolution models. These include better representation of the environment (haze, smoke, snow, sea state, and so on) and better representation of low-level human behavior (not “decisions” so much as human “behavior”). Much work is needed on both, although there have been notable advances.

5 Some aspects of “learning capability” are by no means exotic, however unusual in modeling. Consider a simulation in which the two forces have imperfect information about each other and about some of the “laws of war” (e.g., rates of attrition and movement). As they engage in operations and “observe” simulated events, they can also recalibrate some of their assumptions. Thus, if one side's doctrine calls for fast movements and the other side's assumes slower movements, then both sides should use the simulation's version of “truth” in making decisions after movement rates have been observed.

6 The EAGLE model, developed initially at Los Alamos National Laboratory and subsequently by TRADOC and MITRE, originally used script-based methods from the artificial intelligence community to deal with battalion-level decisions. The CONMOD development at Lawrence Livermore Laboratory was never completed, but included extensive design work and some prototype demonstration of optionally automated large-scale high-resolution simulation. The German Armed Forces University has extensive experience with rule-based and utility-maximizing decision models informed by many years of human play. The RAND Strategy Assessment System (RSAS) of the late 1980s included theater-level and even political-level decision models employing a variety of methods that included adaptive scripts (akin to real-world branched war plans). These models were able to recognize failures and opportunities requiring changes of plan (i.e., changes of adaptive plan). The political models took a world view, had extensive situation-assessment capability, and made plausible decisions about escalation, termination, and change of high-level (theater-level or multitheater) strategy.

A Different Perspective: The Need for New Modeling Approaches

While there are many reasons to believe that high-resolution simulations will be greatly improved in the years to come, there are also reasons to doubt that they will ever be able to generate accurate higher-level “truth” without incorporating information and constraints from higher-level (lower-resolution) perspectives. What may be feasible in principle (such as working from Schrödinger's equations to engineering detail) is often not feasible in practice. It is striking that approaches emphasizing “agent-based modeling” with adaptive agents following relatively simple principles and rules sometimes have the ability to “generate” remarkably realistic macroscopic behaviors, and that the same workers accomplishing this had previously worked diligently, but failed, in a more exclusive bottom-up-with-detail approach.7 Interestingly, some of this work is neither high-resolution nor low-resolution in character, but rather something new—e.g., low-level agents with only a few characteristics and behaviors. One interesting point here is that what some communities refer to as agent-based modeling with emergent behaviors looks to others very much like adding adaptive decision models to traditional simulations. Further, it is likely that the agent-based models built on only a few basic principles will not prove robust enough for decision support (unless the models can be validated against extensive empirical data), in which case it seems even more likely that the two approaches will to some extent converge.
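How simple local rules can "generate" coherent macroscopic behavior is easy to demonstrate in miniature. The toy below is our illustration, not one of the models cited in the text: each agent repeatedly adjusts its heading toward the average of its two neighbors, and a common group heading emerges with no global controller.

```python
# Toy illustration (ours, not drawn from the report) of emergent behavior:
# agents on a ring each average their heading with their two neighbors.
# No rule mentions the group as a whole, yet a shared heading emerges.

def step(headings):
    n = len(headings)
    return [
        (headings[(i - 1) % n] + headings[i] + headings[(i + 1) % n]) / 3
        for i in range(n)
    ]

headings = [0.0, 90.0, 180.0, 90.0, 0.0, 90.0]   # initially disordered
for _ in range(200):
    headings = step(headings)

spread = max(headings) - min(headings)
# After many steps all agents share (nearly) the same heading -- here the
# mean of the initial headings, 75 degrees.
```

The macroscopic regularity (consensus on the mean) is nowhere in the individual rule; it is a property of the interaction structure, which is the sense in which such models sit between "high-resolution" and "low-resolution" characterizations.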
Working Toward a Larger Tool Kit: Models Other Than Simulations

One of the peculiar features of the current discussion of M&S is that the vast majority of discussion is about simulations—so much so that it is sometimes forgotten that other powerful forms of modeling exist. Simulation generates a possible behavior over time of the real system being modeled. One sets initial conditions, executes the simulation, and watches a rendition of how the system may behave. Simulations are well suited to certain types of “what-if?” questions because one merely changes the initial conditions and runs the model again. However, simulations are often very complicated—especially entity-level simulations. They become difficult to control and comprehend. Further, they cannot answer many questions of interest to decision makers, such as “Under what conditions would I be able to . . . ?” or “If I must achieve [some level of performance], how many . . . will I need?”8

Yet another problem with simulations is that they are in some cases the antithesis of the reductionism that is so often critical in decision support. They are so rich that one can lose the forest for the trees. This is especially troublesome when the aggregate behavior of the system turns out to be much simpler, and much more easily understood, than one would ever imagine from studying simulation inputs or the outcomes of a few runs. Yet it happens frequently in systems that approach some kind of steady state, or in systems in which many complex interactions produce a simpler average behavior that can be discussed in simple terms.

The current fascination with simulations, and the need to rebalance it with more effort to use other forms of modeling, has been discussed by Herbert Simon (1990). Versions of simulation that “go beyond ‘what-if?’ questions” by using logic programming methods to embed knowledge that allows the simulation to find initial conditions sufficient to meet specified end states have been discussed and pursued by RAND's Jeff Rothenberg and colleagues (see Mattock et al., 1995). Also, there may be a revival ahead of defense economics dependent less on simulations than on simpler spreadsheet-level models, cost data, and decision support tools.

7 A good example of this was reported to the panel by Darryl Morgeson and Chris Barrett of Los Alamos National Laboratory, based on their transportation modeling work. There are numerous other examples (see also Appendix B).

8 Complexity is sometimes in the eye of the beholder. Some simulations (e.g., Janus) depend ultimately on a relatively small number of principles and data with a relatively well defined origin. Further, behavior is sometimes rather easy to understand because it is so tied to physical processes. However, from an analytical viewpoint the same type of model may seem very complicated because there are so many variables, especially if one does not uncritically accept weapons-effect data, doctrinal estimates of movement, and so on.
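The inverse questions above ("how many . . . will I need?") can often be answered by wrapping a search around a model rather than by hand-running what-if cases. The sketch below is our construction under invented assumptions: a toy air-defense "leaker" model with shots spread evenly across a raid, searched for the smallest interceptor inventory meeting a requirement.

```python
# Hedged sketch (ours, not from the report) of answering an inverse
# question -- "how many interceptors to hold expected leakers below a
# threshold?" -- by searching over a simple model.  The leaker model and
# all numbers are purely illustrative.

def leakers(threats, interceptors, kill_prob=0.8):
    """Expected survivors if shots are spread evenly across the raid."""
    return threats * (1 - kill_prob) ** (interceptors / threats)

def interceptors_needed(threats, max_leakers):
    """Smallest interceptor count meeting the requirement.  A linear
    search suffices here; bisection would serve for a monotone but
    expensive simulation."""
    n = 0
    while leakers(threats, n) > max_leakers:
        n += 1
    return n

need = interceptors_needed(threats=100, max_leakers=5.0)
```

The point is the inversion of control: instead of the analyst guessing inputs and inspecting outputs, the requirement is stated first and the search supplies the input that satisfies it.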
Within the larger technical community of universities and industry, it is notable that younger workers are increasingly expert in powerful desktop analytical tools such as Mathematica and Macsyma, which accomplish symbolic manipulation as well as perform many other functions. A new tool called Analytica facilitates analytical modeling in which input parameters have associated probability distributions. It also facilitates hierarchical modeling.9 While the panel does not discuss such matters much in this report, it believes they merit more attention.

9 Analytica is a product of Lumina Decision Systems, which licenses underlying software from Carnegie Mellon University. Mathematica is a product of Wolfram Research. Macsyma is sold by Macsyma, Inc.
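The style of modeling attributed here to tools like Analytica — inputs carrying probability distributions, outputs examined as distributions — can be imitated in plain Python via Monte Carlo sampling. This stand-in is ours, not Analytica itself; the sortie-capacity model and its parameter ranges are invented for illustration.

```python
# Illustrative Monte Carlo stand-in (ours) for distribution-valued inputs:
# uncertain parameters are sampled, and the output is summarized as a
# distribution (median and a low percentile) rather than a point value.

import random

def sortie_capacity(aircraft, sortie_rate, availability):
    return aircraft * sortie_rate * availability

def monte_carlo(trials=10_000, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        results.append(sortie_capacity(
            aircraft=60,
            sortie_rate=rng.uniform(1.5, 2.5),    # sorties/day, uncertain
            availability=rng.uniform(0.7, 0.9),   # fraction mission-capable
        ))
    results.sort()
    # Return the median and (roughly) the 5th percentile.
    return results[len(results) // 2], results[len(results) // 20]

median, low = monte_carlo()
```

Reporting `low` alongside `median` is the payoff: a decision maker sees not just a nominal capacity but how badly things could plausibly go, which a deterministic point estimate hides.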
M&S INFRASTRUCTURE

Rationale

FIGURE 6.3 Rationale for M&S infrastructure.

The next subject requiring major technical effort is infrastructure. The panel cannot do justice to this subject here (see also Appendix F). Figure 6.3 suggests other reasons for supporting infrastructure initiatives. For example, when building a new “stand-alone” simulation, anecdote suggests that it is typical to spend roughly 75 percent of the resources on the underlying infrastructure (e.g., the tedious programming necessary for bookkeeping on entities and for creating interfaces to input/output devices such as databases and graphical displays), and only 25 percent on the specific content that motivated the simulation effort. Although no one should take these figures as precise, there seems to be a consensus on their being roughly right. It should then be no surprise that “new” simulations are often merely a reprogramming of old models, with no substantive improvements. The situation is analogous to expecting improvement in a manuscript by changing the word-processing system.

Historically, there has been only relatively little reuse of model components within application classes of M&S (e.g., within the group of constructive models used by a particular organization), and extremely little sharing or reuse across boundaries such as those of Figure 6.3 (virtual simulations, war games, and live-range simulations).
A common infrastructure could yield substantial improvements in development time and productivity. Simulation developers could focus on the specific modules of direct interest to them and reuse other modules as appropriate. This approach also facilitates having multiple levels of resolution appropriate to the application. And it minimizes redundancy and inconsistency among simulations developed by different organizations.

Layered Architecture for M&S

To achieve the benefits of a shared simulation infrastructure, a clear architecture is needed. As illustrated in Figure 6.4, this architecture must recognize and address several different layers at which simulations must operate:

- The computing platform layer, including the specific workstations or other processors being used to execute the simulations within a federation.
- The network layer, which includes the local area networks, wide area networks, and interface modules that permit the computing platforms to communicate efficiently with each other.
- The simulation layer, which executes various models to generate the overall simulation behavior that represents the purpose of the study, exercise, rehearsal, or test.
- The modeling layer, which includes repositories of models for representing battlefield tactics, weapons, sensors, communications, terrain characteristics, environmental phenomena, and so on. In many cases, new models may need to be developed for a particular application, but the adaptation and/or reuse of previously developed models should often be encouraged.
- The scenario layer, which includes the development of force layouts, scripts, and initial conditions relevant for the study, exercise, or rehearsal being planned. Again, it will often be necessary to develop new scenarios for a specific application, but whenever possible, adaptation and/or reuse should be encouraged.
- The exploratory analysis and search layer, which supports the exploratory analysis under uncertainty discussed in Appendix B, and also the automated search, where possible, of the system's design space.
- The collaboration layer, which electronically supports collaboration among the various people involved in an M&S study, enabling them to share data, work on models, analyze results, and so on, in a coordinated and efficient fashion.

FIGURE 6.4 Layered architecture for M&S.

High-level Architecture

The recently promulgated high-level architecture (HLA) for M&S10 attempts to address some of the issues raised by the layered architecture just presented. HLA is concerned with simulation modularity, interoperability, and component reuse by means of a consistent conceptual approach, domain-independent infrastructure components, and a repository of previously developed simulation modules. All substantive representations of real-world phenomena are maintained inside the simulation components. HLA serves as the “plumbing” that allows the components to interact with each other.
Under the HLA conceptual approach, the set of simulation components that are assembled for the purposes of an analytic study, a training exercise, or a field test is termed a “federation,” and the individual components are called “federates.” The large majority of federates are simulations, which are responsible for representing some portion of the real-world phenomena under study, but they also include such other components as data collection systems, test status monitoring devices, and controllers' consoles. The latter elements are consumers of simulation data rather than direct participants in the simulation.

10 The memorandum by Undersecretary of Defense Kaminski mandating the high-level architecture, as well as a variety of documents describing it, can be found at the Defense Modeling and Simulation Office's Web site ( http://www.dmso.mil ). Also, a glossary of M&S terms is available at http://www.dmso.mil/docslib/mspolicy/glossary.html .

One of the fundamental architectural precepts of HLA is that federates interact with each other only through a run-time infrastructure (RTI) in accordance with a well-defined interface specification. The RTI is composed of a number of software modules that provide functional services to the federates. One software module is collocated with each federate. The federates communicate with each other by addressing a service request to their local RTI module and by responding to service requests presented by the local RTI module.

Another precept is that the federation needs to agree on a common object model that includes the types and classes of objects represented, the attributes that represent the state of each object, and the interactions that can be generated by one object to affect the state of another. The HLA defines a format for capturing this information, called the object model template (OMT), and it requires that every simulation that is a candidate for inclusion in a federation maintain its own simulation object model using the OMT format. In essence, the process of forming a federation consists of a negotiation regarding the various simulation object models, resulting in decisions about which parts of the various simulation object models will be combined to form the overall federation object model. Among the key elements of the federation object model is an agreement about how simulation time will be managed across the federation.

The RTI interface specification defines six groups of services:

- Federation management services provide the basic functions required to control a particular execution of a federation, such as joining and resigning from the execution, and starting, pausing, and resuming the flow of time.
- Declaration management services provide the functions by which individual federates convey to the RTI the classes of objects, attributes, and interactions they will represent during a given execution and the classes of objects, attributes, and interactions they need to subscribe to.
- Object management services provide the functions needed to create and delete specific instances of objects of various classes, and to create and delete reflections of the state of objects that are being represented by remote federates.
- Ownership management services provide an opportunity to transfer the responsibility for updating some or all of the attributes of an object to another federate. This permits, for example, the sharing of a high-fidelity sensor output computational capability by several federates.
- Time management services coordinate the advancement of time in a consistent way across the federation. Many modes of time management are supported, ranging from synchronized time steps at agreed-upon rates to negotiations of time advances of arbitrary magnitudes among event-oriented simulations.
- Data distribution management services provide mechanisms for coordinating data publications and subscriptions to ensure the efficient routing of data only to those federates that have requested it, with minimal amounts of irrelevant data that need to be processed.

The RTI is designed to insulate the individual federates from differences in implementation languages and internal data representations across the federation. Although the HLA makes extensive use of object-oriented representations to describe the interactions among federates, it does not require that any federate use object-oriented programming languages or representations internally. Finally, the HLA envisions that federates and their object models will be catalogued in resource repositories where they can be browsed and selected as candidates for reuse in new federations. Resource repository data would include pointers to more detailed documentation and points of contact for those responsible for maintaining various simulation components.

Although HLA makes a substantial contribution to the architecture envisioned in Figure 6.4, it contributes mainly to the lower, technological levels rather than the higher, “intellectual” levels. Indeed, it is somewhat unfortunate that the term “high level” was employed to designate this important development. In particular, while HLA standardizes the simulation infrastructure in which models are executed in distributed fashion, it specifically does not intend to standardize the model content of simulations. However, there are many issues that still need to be addressed in the “modeling layer” of Figure 6.4, which, if not addressed, could sharply narrow the utility of HLA within DOD. For example, the RTI of HLA usefully contributes to the standardization of time management so that developers need not worry about this aspect of distributed simulation. However, unless there is semantic consistency among the models being federated, their federation cannot result in a meaningful overall composite.
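The architectural precepts above — federates interacting only through the RTI, declaring publications and subscriptions, and advancing time in a coordinated way — can be caricatured in a few dozen lines. This is emphatically our toy illustration of the concepts, not the actual HLA interface specification; all class and method names are invented.

```python
# Deliberately minimal, hypothetical sketch of the RTI precepts: federates
# join, declare subscriptions, receive only the updates they asked for,
# and share a lockstep clock.  Not the real HLA API.

class MiniRTI:
    def __init__(self):
        self.federates = {}        # federate name -> received updates
        self.subscriptions = {}    # attribute class -> subscriber names
        self.time = 0.0

    def join(self, name):                         # federation management
        self.federates[name] = []

    def subscribe(self, name, attr_class):        # declaration management
        self.subscriptions.setdefault(attr_class, set()).add(name)

    def update(self, sender, attr_class, value):  # object/data distribution
        for name in self.subscriptions.get(attr_class, ()):
            if name != sender:                    # route only to subscribers
                self.federates[name].append((self.time, attr_class, value))

    def advance_time(self, step):                 # time management (lockstep)
        self.time += step
        return self.time

rti = MiniRTI()
for fed in ("flight_sim", "sensor_sim", "logger"):
    rti.join(fed)
rti.subscribe("sensor_sim", "aircraft.position")
rti.subscribe("logger", "aircraft.position")
rti.advance_time(1.0)
rti.update("flight_sim", "aircraft.position", (10.0, 20.0))
# Only the two subscribers receive the update; the sender does not.
```

Even this caricature exhibits the key design point: `flight_sim` knows nothing about who consumes its position reports, which is what lets federates be swapped or added without touching each other's code. What it omits — and what the text warns about — is any guarantee that "aircraft.position" means the same thing to publisher and subscriber.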
The Department of the Navy should not only adopt HLA, but also encourage the further development of the higher, “intellectual” layers, where reuse of model content can be facilitated.

Reaction to HLA

General Reactions

As one would expect for any endeavor with such broad potential application and impact, responses to HLA have varied widely. In some circles, it is viewed with considerable suspicion. Much of this suspicion can be attributed to misinformation and natural human tendencies toward fear of the unknown. In some cases, however, one suspects that certain individuals and organizations fear the exposure of the internals of their simulations and possible loss of control over them. In a few cases, there is fear that this approach could lead to the unwise imposition of a few “one-size-fits-all” simulation components that may not be suited for certain applications. And, of course, there is the perennial problem of a reluctance to pay up-front costs in expectation of benefits that may (or may not) manifest themselves later.

Despite these inevitable concerns and misgivings, HLA is being well received by those with a history of strong commitment to the goal of improved interoperability and reuse. The architecture is inherently broader and more flexible than the distributed interactive simulation (DIS) standards, around which many of the advocates of simulation interoperability have historically congregated. The fact that this community voluntarily set aside its DIS standards development activities in order to adopt the HLA is a powerful testimony to the potential of the HLA concept.

As was previously noted, a price must be paid for whatever progress is made. Various M&S user communities must negotiate common conceptual models and definitions that can be used across multiple federations. Decisions must be made about which legacy applications are worth the investment required to overhaul them to bring them into accordance with new standards.

Realizing the benefits of improved interoperability and reuse will require the active and unequivocal support of the senior Navy Department leadership. There will undoubtedly be problems in implementing, promulgating, and institutionalizing these changes. Perhaps some aspects of the current HLA will need to change; perhaps others will be required. The sooner this process gets under way, the better.

As mentioned earlier, in and out of each activity supported by M&S will be flowing not only information, but also models and data. By no means will everything be connected to everything, but substantial reuse and sharing will occur because those doing the work will benefit. And, again, there is more involved here than just model objects and databases. A key element of the M&S infrastructure is commonality of intellectual constructs.
Confusion of Issues: Standards Versus Stamping Out of Variety

Significantly, the panel believes that much of the resistance to HLA is probably due to a confusion of two phenomena. On the one hand, DOD is promulgating content-free standards that should facilitate the marketplace of ideas and products. On the other hand, DOD is constantly exhorting the Services to eliminate alleged redundancies. The image being conveyed is that DOD wants to converge on single models. Indeed, senior officials and military officers have often said as much, although sometimes grudgingly acknowledging that perhaps very modest redundancy (e.g., two models of the same phenomenon?) might be acceptable. That desire to converge on single models is, the panel believes, a serious mistake and quite at odds with the desire to improve the content and general quality of models. The panel will return to this issue later. Here let us merely note that some opposition to the HLA is probably due to the understandable resistance to what is seen as overstandardization of models. By contrast,
HLA developers intend HLA to be content neutral and to facilitate, not obstruct, the marketplace of competitive models. Let us now assume that the Department of the Navy actively supports HLA and a common infrastructure. Some basic questions will still remain: Who develops the simulation modules? Who verifies, validates, and accredits these modules for specific uses? Who maintains these modules once they have been developed, and updates them as the systems they represent change? Thus, there are many issues ahead.

REPOSITORIES AND MODEL INTEGRATION

It is a waste to have to reinvent the wheel each time a new car is designed. Yet as successive generations of simulations were developed in the past, such wasteful restarts from scratch were the rule rather than the exception. Nowadays, the advent of object-oriented design and programming has provided the technology to support object repositories, where objects may be reused time and time again. These matters are discussed in more detail in Appendix F.

ADVANCED ENVIRONMENTS AND HIGH-LEVEL LANGUAGES FOR M&S

We are all aware of how important “environments” are if we use personal computers. A good current environment allows us to move quickly among applications and transfer material from one application to another—primarily among word processing, graphics, and spreadsheet programs. Within each such application we have also come to expect tools such as spell checkers, on-line documentation, and hand-holding multimedia primers that lead us through new operations. CAD/CAM technology is, of course, of great value and becoming well known. “Environments,” then, are extremely important to both the use and the development of M&S. And high-level languages can be far superior to ones that pull users down into levels of programming detail beyond what they need.
The commonly used BASIC language, with its interactiveness and relatively simple syntax, has long been popular for programming by nonexperts. A variety of specialized high-level languages have proved quite powerful for students in science and engineering and for professionals. 11

11 Examples here include the SIMSCRIPT™ and MODSIM™ simulation languages, the systems dynamics language iThink™, and, much less well known, the RAND-ABEL™ language used to develop the RAND Strategy Assessment System. The programs Mathematica™ and Macsyma™ include high-level language features, as well as facilities for symbolic manipulation and other operations.
Advances in this domain will make it possible to greatly improve the comprehensibility of models, the traceability of results, and their testability in particular contexts. Consider a few examples: Advanced languages can make it easier for developers to build in “simple explanation facilities,” so that an M&S user can see not only the predicted system behavior, but also the key determinants of that behavior. For example, a log statement might say, “Because the JSTARS was inoperative and . . . the acquisition probability for moving targets is reduced by 50 percent, from — to — .” Further advances in “explanation capability” will make it feasible to query the simulation about why certain events occurred or under what conditions they could occur. Such capabilities would probably depend on logic programming. Where terse “explanation log” depictions are inadequate, users should be able to ask for more information and be immediately transported to the relevant features of the underlying computer code. If this code is in a high-level language, they may be able to read and understand it directly. Or it may be that what matters are assumptions (i.e., parameter values). Again, with nothing more than a mouse click, the user should be able to see the current values of the relevant parameters—along with documentation about where the data values came from and, in some cases, why they have the values they do (e.g., “Based on intelligence reports as of March 28, 2015, it is now believed that the SA-25 surface-to-air missile system as deployed in Libya has the following features: . . .”). With another mouse click, the user should be able to read the original intelligence report, which might be posted on-line at DIA headquarters.
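The “simple explanation facility” idea above can be sketched as a model rule that records, alongside its effect, the condition that triggered it. The parameter names and the 50 percent factor follow the JSTARS example in the text; everything else is an illustrative assumption, not an actual system design.

```python
# Sketch of a "simple explanation facility": each model rule records not just
# its effect but why it fired, so a user can later ask why a value changed.
# Parameter names and values are illustrative only.

class ExplainingModel:
    def __init__(self):
        self.params = {"p_acquire_moving": 0.8, "jstars_operative": False}
        self.log = []  # human-readable explanations, one per rule firing

    def apply_rules(self):
        if not self.params["jstars_operative"]:
            old = self.params["p_acquire_moving"]
            new = old * 0.5
            self.params["p_acquire_moving"] = new
            self.log.append(
                f"Because the JSTARS was inoperative, the acquisition "
                f"probability for moving targets is reduced by 50 percent, "
                f"from {old:.2f} to {new:.2f}."
            )

model = ExplainingModel()
model.apply_rules()
print(model.log[0])
```

The essential design choice is that the explanation is generated at the moment the rule fires, with the triggering state in hand, rather than reconstructed afterward from outputs.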
If the user's problem relates more to understanding relationships among variables in the underlying model, then he should be able—again with no more than a mouse click, and with intelligent software noting his context or querying him on the type of information sought—to see a design-level depiction of the model itself. This might take the form of data-flow diagrams, object-model hierarchies, overview text, and so forth, depending on his needs. And, of course, it should be possible for the user to “reach back” to the model builders, or even to the researchers who provided the knowledge base. This might be done by e-mail, video conferencing, phone calls, or a broadcast query to the relevant subset of Web users. Making complex models comprehensible remains a frontier challenge, and many workers have labored valiantly only to produce simulations understandable only to themselves. Still, there has also been considerable progress. Ironically, there have also been setbacks, because of an interesting and frustrating tension between the desire for advanced features and the desire for standardization. Currently, work on advanced languages and environments relevant to military modelers seems to have slowed considerably, in large part because those building the advanced tools
need to use methods that are not compatible with commercial software such as Microsoft's Visual Basic™ or the many graphics standards. This is a passing phase, however, and there will again be major progress. One indication is the growing interest in industry-developed methodologies and tools for object-oriented modeling, not just object-oriented programming. 12 Some of these tools are now being used in the JWARS program, for example. They are especially significant because building “explanation capabilities” often depends critically on the clarity and structure of the underlying model design. A related and significant development is the increasing recognition of the need for common models of the mission space (CMMS). These can substantially improve the degree to which workers who wish to share each other's models are able to communicate correctly. That is, they can help improve the semantic interoperability of models. Finally, the panel notes that commercial industry will probably not support much of what the Department of the Navy (and DOD) needs with respect to comprehensibility, traceability, and the like in combat simulations embedded in command and control systems, or with respect to the competent reuse of models available in a community repository. The incentives for doing so do not yet exist, although we expect that they will emerge in time. Thus, investment is needed. Its success, however, will probably depend on “squaring the circle,” that is, finding ways to incorporate advanced features such as explanation capabilities into software largely written according to emerging industry standards.

RECOMMENDATIONS ON JOINT MODELS

Concerns

One useful focus for Department of the Navy thinking about M&S is the set of joint systems now in development (most prominently JSIMS and JWARS).
Taken together, these worthy programs (including their service components) have a price tag approaching $1 billion. It is DOD's intention that JSIMS and JWARS become the core for all future joint work on training and analysis, respectively. If successful, JSIMS and JWARS will dominate the joint M&S scene for the next 20 years. Thus, it is important to the Department of the Navy that naval forces be adequately represented. Otherwise, valuable training opportunities will be compromised and the Navy and Marines will suffer in the competitions over doctrinal changes, future missions, and force-structure tradeoffs. More generally, the quality of joint work will suffer. Unfortunately, it is likely that first-generation versions of JSIMS and JWARS

12 See, for example, Rumbaugh et al. (1991).
will not be satisfactory—even with heroic efforts, and even though the products will have many excellent features. There will be major shortcomings with respect to both content and performance. Consequently, the panel recommends that the Navy insist that DOD and the program offices adopt open-architecture attitudes that promote rather than discourage the substitution of improved modules as ideas arise from the research and operations communities, and that they build explicit and well-exercised mechanisms to assure that such substitutions occur. This may seem uncontroversial, and it calls for no more than what some of the programs (notably JSIMS) are projecting, but the history of DOD modeling has often been to produce relatively monolithic and inflexible programs. Further, there has been great DOD emphasis in recent years on avoiding alleged redundancies, collecting “authoritative representations,” and exercising configuration control. The panel observed widespread frustration among analysts and other substantive users of models, who see DOD's M&S efforts as driven by civilian and military managers who think models are commodities to be standardized, who sometimes seem to value standardization more highly than quality (harsh words, but too important to be omitted), and who have given near-exclusive emphasis to software technology issues. They and the panel believe M&S should instead be seen as organic, evolving, and flexible systems with no permanent shape (but with standardized infrastructure, including many component pieces). In fact, the visionary technical infrastructure being promoted by OSD's Defense Modeling and Simulation Office (DMSO) (and by software technologists) will permit the open-system approach and will permit competition among alternative models (e.g., alternative representations of ballistic-missile defense, mine warfare, or C4ISR).
Thus, while it would be easy for JWARS, JSIMS, and other systems to end up as rigid monoliths, with the right architecture and organizational structure DOD can have its cake and eat it too: it can have “standard configurations” while still making it easy for users to substitute model components as new ideas and methods emerge. An important but more subtle aspect of this visionary infrastructure is connecting model evolution to the R&D and operational communities concerned with both current and futuristic doctrine, and, significantly, nurturing a competition of ideas and models. In that way the evolution will be more like survival of the soundest than like continuation of what has previously been approved. The panel underlines the problem of incorporating research results where they exist, because at present the communities that do research and those that program models often do not communicate well, and there is little pressure to assure that the “best” models are reflected in M&S. Indeed, there is much pressure to avoid changes.
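The open-architecture recommendation above amounts to a simple structural discipline: a simulation exposes named component slots, and a competing or improved module can be swapped into a slot without touching the rest of the system. A minimal sketch, with entirely hypothetical module names (nothing here is drawn from JSIMS or JWARS):

```python
# Minimal sketch of module substitution in an open-architecture simulation.
# A "module" is any callable that transforms the simulation state; registering
# a new module under an existing slot name replaces the old one.

class Simulation:
    def __init__(self):
        self.modules = {}  # slot name -> current module

    def register(self, slot, module):
        # Substituting a module is a one-line operation, by design.
        self.modules[slot] = module

    def step(self, state):
        for module in self.modules.values():
            state = module(state)
        return state

def simple_attrition(state):
    state["red_strength"] *= 0.95
    return state

def improved_attrition(state):
    # A later, competing representation of the same phenomenon.
    state["red_strength"] *= 0.90
    return state

sim = Simulation()
sim.register("attrition", simple_attrition)
sim.register("attrition", improved_attrition)  # replaces the earlier module
result = sim.step({"red_strength": 100.0})
```

The design choice worth noting is that the substitution mechanism is exercised routinely (every registration goes through it), which is what the panel means by “well-exercised mechanisms” rather than a theoretical capability that atrophies.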
Technical Attributes Needed in Joint Models

Against this background of concerns, the panel recommends that the Navy advocate an approach to joint-model development that takes a long-haul view and places an associated emphasis on flexibility. Current model-building efforts should lay the groundwork for the following, which will be important in selected applications in the years ahead:

Multi-resolution modeling, not only of entities, but also of physical and command-control processes, with the objective of building integrated families of models with different levels of resolution.

Decision modeling to represent commanders at various levels, with both realistic depictions and depictions that provide for optimal behaviors.

Diverse representations of uncertainty, including the use of probability distributions (and, sometimes, alternatives such as fuzzy-set concepts), even in aggregate-level models.

Systematic treatment of important correlations (e.g., the “configural effects” of mine warfare and air defense) (see also Appendix J).

Explanation capabilities linking simulated behavior to situations, parameter values, rules and algorithms, and underlying conceptual models.

Mixed modes of play that are interactive, selectively interruptible (e.g., for only higher-level commander decisions), and automated. (The panel regards the option for human play as critical for analytic applications as well as training, and the option of closed play, for example, of the opponent, as critical for training.)

Testing of new doctrinal concepts requiring new entities, attributes, and processes.

Different types of models. The systems should accommodate model types as diverse as general state-space and simple Lanchester equations, entity-level “physics-based” models, and agent-based models with emergent behaviors.
They should also accommodate varied tools for such uses as statistical analysis, generation of response surfaces, symbolic manipulation, inference engines, and search methods (e.g., genetic algorithms).

Tailored assembly. The systems should facilitate the tailored creation of models, including relatively simple M&S for specific applications. That is, one should conceive of JSIMS and JWARS as tool kits with rapid-assembly and modification mechanisms. Excessive complexity is paralyzing and obfuscatory.

In some respects, the last item is the most important. Given the breakthroughs in software technology over the last two decades, it is feasible (though not easy)—and essential—for major M&S efforts to be designed for frequent adaptation, specialization, and module-by-module improvement. One should think of assembling the right model, not taking it from the shelf whole. Further, it should be possible to discard or abstract complexities irrelevant to the problem
at hand. Doing so runs directly counter to the common inclination to seek high resolution for everything, but tailored simplifications are crucial in applications. This is much better understood by those who have used M&S for studies or exercises than by those who develop software. That said, even analysts often find themselves using more cumbersome models than are truly suitable for their purposes. For example, they may use a complex campaign model to examine tradeoffs among deep-strike weapon systems being assessed for their ability to halt advancing armies. Arguably, it would be better to do most of the work with a more specialized and much simpler model with which one could do exploratory analysis. Finally, a word of caution about the concept of assembly or composition. It is common, in the heady days when there are more notions and viewgraphs than demonstrated capabilities, for developers to talk loosely of building systems so flexible that they will serve quite different functions for distinctly different communities of users. In practice, such visions have seldom proved out. Instead, the systems become so complex—in their effort to serve many user communities—that they do nothing well, and working with them becomes difficult and unpleasant. The panel's view is that while designing with an assembly perspective is essential, there are limits to what can be accomplished. Specializations will continue to be needed. It is an open question whether systems like JWARS and JSIMS will prove as versatile as some of the extravagant visions anticipate; DOD's image of them as general-purpose tools may prove wrong. This means that the Department of the Navy (and DOD) should hedge their bets in this regard. 13
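A concrete instance of the “much simpler model” argued for above is the classical Lanchester square law, one of the model types named earlier. A few lines suffice for exploratory tradeoff analysis of force ratios and effectiveness coefficients; the coefficients here are illustrative, not calibrated to any real system.

```python
# Deliberately simple exploratory model: Lanchester "square law" attrition,
#   dB/dt = -r * R,   dR/dt = -b * B,
# integrated with small Euler steps. Coefficients are illustrative only.

def lanchester_square(blue, red, b_eff, r_eff, dt=0.01, t_max=50.0):
    """Return (blue, red) strengths when one side is annihilated or time runs out."""
    t = 0.0
    while blue > 0 and red > 0 and t < t_max:
        d_blue = -r_eff * red * dt
        d_red = -b_eff * blue * dt
        blue = max(0.0, blue + d_blue)
        red = max(0.0, red + d_red)
        t += dt
    return blue, red

# With equal effectiveness and a 2:1 numerical edge, the square law predicts
# the larger force wins with roughly sqrt(200**2 - 100**2) ~ 173 survivors.
blue, red = lanchester_square(blue=200.0, red=100.0, b_eff=0.02, r_eff=0.02)
```

Because a run costs microseconds, one can sweep thousands of parameter combinations, precisely the exploratory analysis a full campaign model makes cumbersome.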
One way to do so is to develop stand-alone models for specialized purposes, although perhaps requiring interoperability and the potential for being used within the “big” systems.

RECOMMENDATIONS FOR RESEARCH

Research in Key Warfare Areas

As noted earlier, there has been relatively little recent investment in understanding the phenomenology of military operations at the mission and operational levels. Much of the basis for related M&S is still programmer hypothesis and qualitative opinion expressed by subject-matter experts. This has not always been so. During and after World War II, operations research worked from a rich empirical base, but now the United States is entering a period of nonlinear, parallel, information-era warfare for which the intuition of scientists, operations researchers, and warriors is insufficient. Further, it will be relying on complex

13 The same observation is made within the realm of software. See, for example, Gibbs (1997).
[FIGURE 6.5 Using exercises as a source of empirical data for M&S. SOURCE: Reprinted, by permission, from Davis (1995b). Copyright 1995 by IEEE.]

systems working as designed in multifaceted joint campaigns. Success may be much less tolerant of errors in concept and execution than in days past. Subjects of particular importance for M&S-related research in the information era are (1) aspects of command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) that involve the content and reliability of information, as well as its transmission; (2) tactics and strategy; (3) human behavior; and (4) the very nature of the extended battlefield in future operations. This list, however, is abstract, and research could easily become disjointed. The panel recommends that the Navy and Marines select a few high-priority warfare areas and create research programs to support them. These programs should be organized so as to assure close ties to the operational and doctrinal-development communities, and to relevant training and exercise efforts that could be mined as a source of empirical knowledge (e.g., as suggested in Figure 6.5, which would exploit emerging capabilities for distributed interactive simulation). 14 This is a nontrivial and potentially controversial suggestion, since the long-standing tradition has been to avoid—and even prohibit—extensive data collection for use beyond those being trained. The costs of such efforts would be small in comparison with those for buying and operating forces, or even for procuring large models. Although the Department of the Navy (and DOD) need to make up for past failures to invest adequately in research, this is a domain in which a total of $20 million to $30 million per year can accomplish a great deal.
As a first list of warfare areas for focused research, the panel recommends the following, which have some overlaps:

14 Exercises, of course, are another form of simulation—not the “real thing.”
Expeditionary warfare and littoral operations,
Joint task force operations with dispersed forces,
Long-range precision strike against forces employing countermeasures,
Theater-missile defense, including counterforce and speed-of-light weapon options, against very large ballistic-missile and cruise-missile threats, and
Short-notice early-entry operations with opposition.

Each of the above warfare areas has major knowledge gaps that could be narrowed by empirical and theoretical research closely tied to the “warrior communities.” The report describes key attributes of research programs for such warfare areas. An overarching theme is the need to take a holistic approach rather than one based exclusively on either top-down or bottom-up ideas. A second theme is that the research should be seen as focused military science, not model building per se. This will determine the type and range of people involved, as well as the depth of the work. Two examples may be useful here. The first is the challenge of developing command-control concepts for highly dispersed Marine Corps forces operating in small units far from their ship-based support and dependent on a constellation of joint systems. The Marine Corps is studying alternative concepts in the Hunter/Warrior experiments. Such experiments need to be accompanied by systematic research and modeling of different types, perhaps including new types of modeling useful in breaking old mind-sets. It is plausible, for example, that cellular-automata models could help illuminate the behaviors of dispersed forces with varying command-control concepts, ranging from centralized top-down control to decentralized control based on mission orders.
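The kind of cellular-automata exploration suggested above can be sketched very minimally. In the toy below, each unit applies one local rule (move toward the mean position of units within communications range), a crude stand-in for decentralized, mission-order control. The rule, ranges, and positions are all invented for illustration, not drawn from the Hunter/Warrior experiments.

```python
# Toy one-dimensional cellular-automaton-style model of dispersed units.
# Local rule: each unit moves one step toward the mean position of units
# within its communications range. Synchronous update, as in a classic CA.
# Entirely illustrative; parameters are invented.

def step(positions, comms_range):
    new = []
    for i, p in enumerate(positions):
        neighbors = [q for j, q in enumerate(positions)
                     if j != i and abs(q - p) <= comms_range]
        if neighbors:
            target = sum(neighbors) / len(neighbors)
            p += (1 if target > p else -1 if target < p else 0)
        new.append(p)
    return new

# Four units in mutual (chained) contact, plus one isolated unit at 40.
positions = [0, 5, 9, 20, 40]
for _ in range(30):
    positions = step(positions, comms_range=12)
```

Even this caricature exhibits the qualitative behavior of interest: units within communications range self-organize into a tight cluster, while the out-of-range unit never rejoins, which is the sort of hypothesis-generating result the text suggests such models can provide.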
To its great credit, the Marine Corps is currently exploring such possibilities, opting to accept some “hype and smoke” in the realm of controversial complex-systems research in exchange for new perspectives and tools useful in doctrinal innovation. While the panel does not believe such simplified models will prove adequate in the long run, they can be very helpful in developing new hypotheses. A Navy example involves mine and countermine warfare. From prior research based on sophisticated probabilistic modeling that accounts for numerous “configural effects” (i.e., effects of temporal and spatial correlations), we know that effective strategies for laying or penetrating minefields are often counterintuitive. By exercising such models and simulation-based alternatives in an exploratory manner (as distinct from answering specific questions), it should be possible to develop decision aids of great value in training, acquisition, and operations. Such aids should not, however, focus only on “best estimate” single-number predictions; they should instead provide commanders with information about the odds of success as a function of the information available. If the aids are to be useful, they must be informed by an intimate understanding of operational commanders' needs.
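The distinction between a single-number prediction and odds of success can be made concrete with a Monte Carlo sketch. The mine count and actuation probability below are invented, and for simplicity the mines are treated as independent, so the sketch deliberately ignores the configural (correlation) effects the text says real minefield models must capture.

```python
# Monte Carlo sketch: estimate the probability a ship survives a minefield
# transit, rather than reporting only an expected-loss point estimate.
# Assumes independent mine actuations, so it omits the "configural effects"
# (spatial/temporal correlations) that serious minefield models include.

import random

def transit_survives(n_mines, p_actuation, rng):
    """One simulated transit: survive only if no encountered mine actuates."""
    return all(rng.random() >= p_actuation for _ in range(n_mines))

def odds_of_safe_transit(n_mines, p_actuation, trials=20000, seed=1):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    survived = sum(transit_survives(n_mines, p_actuation, rng)
                   for _ in range(trials))
    return survived / trials

# With 10 mines along the track and a 3% actuation chance each, the analytic
# survival probability under independence is 0.97**10, about 0.74.
p_hat = odds_of_safe_transit(n_mines=10, p_actuation=0.03)
```

A decision aid built this way reports a distribution of outcomes (here, a survival probability) that shifts as intelligence about the field improves, which is the “odds of success as a function of information” the panel calls for.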